ADDRESSING BIG DATA WITH HADOOP
Journal: International Journal of Computer Science and Mobile Computing - IJCSMC (Vol. 3, No. 2)
Publication Date: 2014-02-28
Authors: Twinkle Antony; Shaiju Paul
Pages: 459-462
Abstract
Nowadays, a large volume of data is produced from various sources such as social media networks, sensory devices, and other information-serving devices. This large collection of unstructured and semi-structured data is called big data. Conventional databases and data warehouses cannot process this data, so new data-processing tools are needed. Hadoop addresses this need. Hadoop is an open-source platform that provides distributed computing over big data. It is composed of two components: a storage model called the Hadoop Distributed File System (HDFS) and a computing model called MapReduce. MapReduce is a programming model that handles large, complex tasks in two steps, called map and reduce. In the map stage, the master node partitions the problem into subproblems and distributes the tasks to worker nodes. The worker nodes solve their subproblems and pass the results back to the master node. In the reduce phase, the master node combines the answers to the subproblems into a final solution.
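The map/reduce flow described in the abstract can be sketched as a small in-process word-count example. This is only an illustration of the pattern, not the Hadoop API itself: the `map_phase`, `shuffle`, and `reduce_phase` functions are hypothetical names, and the shuffle step that groups intermediate keys is performed by the framework in a real Hadoop job.

```python
# Minimal sketch of the MapReduce pattern: a word-count job.
# Assumption: this simulates on one machine what Hadoop distributes
# across master and worker nodes; function names are illustrative.
from collections import defaultdict

def map_phase(document):
    """Map step: emit (word, 1) pairs for each word in a document chunk."""
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    """Group intermediate values by key (done by the framework in Hadoop)."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    """Reduce step: combine all counts for one word into a final total."""
    return key, sum(values)

# Each chunk stands in for a subproblem handed to a worker node.
chunks = ["big data needs hadoop", "hadoop handles big data"]
intermediate = [pair for chunk in chunks for pair in map_phase(chunk)]
result = dict(reduce_phase(k, v) for k, v in shuffle(intermediate).items())
print(result)  # {'big': 2, 'data': 2, 'needs': 1, 'hadoop': 2, 'handles': 1}
```

In an actual Hadoop deployment, the map tasks run in parallel on the worker nodes holding the relevant HDFS blocks, and the reduce tasks receive the shuffled intermediate pairs over the network.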