ResearchBib

Implementation of the Map Reduce Paradigm Techniques in Big Data

Journal: International Journal of Scientific Engineering and Research (IJSER) (Vol.3, No. 6)

Publication Date:

Authors : ; ;

Page : 108-113

Keywords : MapReduce; Hadoop; BigData; Google;

Source : Download | Find it from : Google Scholar


The MapReduce methodology was popularized by its use at Google, was recently patented by Google for use on clusters and licensed to Apache, and is currently being developed by an active community of researchers. MapReduce is a programming model initiated by Google's team for processing huge datasets in distributed systems; it helps programmers write programs that process and generate large data sets. Users specify a map function that processes a key/value pair to produce a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key. Big data is processed using this MapReduce programming paradigm. Big data sizes are a constantly moving target, currently ranging from a few dozen terabytes to many petabytes in a single data set. MapReduce has been seen as one of the key enabling approaches for meeting the continuously increasing demands on computing resources imposed by massive data sets, owing to the high scalability of the paradigm, which permits massively parallel and distributed execution over a large number of computing nodes. The volume of data, together with the speed at which it is generated, makes it difficult for current computing infrastructure to handle big data; processing it through the MapReduce paradigm overcomes this drawback. A typical implementation of the MapReduce paradigm requires network-attached storage and parallel processing. Apache Hadoop and HDFS are widely used for storing and managing big data.
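The map/reduce contract described in the abstract (a map function emitting intermediate key/value pairs, and a reduce function merging all values that share an intermediate key) can be sketched in a few lines. This is a minimal, single-process illustration of the programming model, not the paper's implementation; the function names (`map_fn`, `reduce_fn`, `map_reduce`) and the word-count task are illustrative assumptions.

```python
from collections import defaultdict

def map_fn(document):
    """Map: emit an intermediate (word, 1) pair for every word in a document."""
    for word in document.split():
        yield (word.lower(), 1)

def reduce_fn(key, values):
    """Reduce: merge all intermediate values associated with the same key."""
    return (key, sum(values))

def map_reduce(documents):
    # Shuffle step: group intermediate values by their intermediate key.
    groups = defaultdict(list)
    for doc in documents:
        for key, value in map_fn(doc):
            groups[key].append(value)
    # Apply the reduce function once per distinct intermediate key.
    return dict(reduce_fn(k, v) for k, v in groups.items())

counts = map_reduce(["big data big clusters", "big data processing"])
print(counts["big"])  # → 3
```

In a real cluster the map calls run in parallel across nodes and the shuffle moves intermediate pairs over the network, but the scalability the abstract mentions comes precisely from the fact that each map call and each per-key reduce call is independent.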

Last modified: 2021-07-08 15:24:22