Big Data Processing Using Hadoop: Survey on Scheduling
Journal: International Journal of Science and Research (IJSR), Vol. 3, No. 10
Publication Date: 2014-10-05
Authors: Harshawardhan S. Bhosale; Devendra P. Gadekar
Pages: 272-277
Keywords: Big data; Hadoop; Map Reduce; Locality; Job Scheduling
Abstract
The term Big Data describes the techniques and technologies used to capture, store, distribute, manage, and analyze petabyte- or larger-sized datasets that arrive at high velocity and in varied structures. Big data can be structured, semi-structured, or unstructured, which conventional data management methods cannot handle effectively. Big Data is data whose scale, diversity, and complexity require new architectures, techniques, algorithms, and analytics to manage it and to extract value and hidden knowledge from it. To process such large amounts of data in an inexpensive and efficient way, the open-source framework Hadoop is used. Hadoop enables the distributed processing of large data sets across clusters of commodity servers. Hadoop uses FIFO as its default scheduling algorithm for job execution, and its performance can be improved by choosing an appropriate scheduling algorithm. The objective of this research is to study and analyze the various scheduling algorithms that can be used in Hadoop for better performance.
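For context, the sketch below is a minimal MapReduce word-count job of the kind whose placement and execution order the surveyed schedulers (FIFO, Fair, Capacity, and others) decide. It assumes the standard Hadoop MapReduce Java API (org.apache.hadoop.mapreduce); the class and path names are illustrative only and are not taken from the paper.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: emit (word, 1) for every token in the input split.
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: sum the counts emitted for each word.
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Which scheduler runs this job (FIFO by default in classic Hadoop, or Fair/Capacity)
    // is a cluster-side setting chosen by the administrator, not by the job itself.
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input path (argument)
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output path (argument)
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

When many such jobs are submitted concurrently, the cluster's scheduler determines how map and reduce tasks are assigned to nodes, which is where data locality and job-scheduling policy affect overall performance.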