Big Data Processing Using Hadoop: Survey on Scheduling
Journal: International Journal of Science and Research (IJSR) (Vol. 3, No. 10)
Publication Date: 2014-10-05
Authors : Harshawardhan S. Bhosale; Devendra P. Gadekar;
Pages: 272-277
Keywords : Big data; Hadoop; Map Reduce; Locality; Job Scheduling;
The term Big Data describes innovative techniques and technologies for capturing, storing, distributing, managing, and analyzing petabyte- or larger-sized datasets with high velocity and varied structure. Big data can be structured, unstructured, or semi-structured, which renders conventional data management methods inadequate. Big Data is data whose scale, diversity, and complexity require new architectures, techniques, algorithms, and analytics to manage it and to extract value and hidden knowledge from it. To process large amounts of data inexpensively and efficiently, the open-source software framework Hadoop is used. Hadoop enables the distributed processing of large data sets across clusters of commodity servers. Hadoop uses FIFO as its default scheduling algorithm for job execution, and its performance can be improved by choosing an appropriate scheduling algorithm. The objective of this research is to study and analyze the various scheduling algorithms that can be used in Hadoop for better performance.
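As a rough illustration of the FIFO behavior the abstract mentions, the following is a minimal simulation sketch (not from the paper; job names, slot counts, and task counts are invented for illustration). It models Hadoop's default FIFO scheduler, in which all free task slots serve the oldest unfinished job before any later submission runs:

```python
from collections import deque

def fifo_schedule(jobs, slots=2):
    """Toy simulation of FIFO job scheduling.

    jobs: list of (name, submit_time, task_count) tuples.
    Each time step, up to `slots` tasks of the head-of-queue job
    execute; later jobs wait until the head job finishes entirely.
    Returns a dict mapping job name -> finish time.
    """
    queue = deque(sorted(jobs, key=lambda j: j[1]))
    remaining = {name: tasks for name, _, tasks in jobs}
    finish = {}
    t = 0
    while queue:
        name, submit, _ = queue[0]
        t = max(t, submit)  # a job cannot start before it is submitted
        # all slots go to the oldest unfinished job (FIFO strictness)
        remaining[name] -= min(slots, remaining[name])
        t += 1
        if remaining[name] == 0:
            finish[name] = t
            queue.popleft()
    return finish

# Hypothetical workload: one long job submitted first, two short jobs after.
times = fifo_schedule([("etl", 0, 6), ("report", 1, 2), ("adhoc", 2, 1)])
print(times)  # → {'etl': 3, 'report': 4, 'adhoc': 5}
```

Note how the short `report` and `adhoc` jobs finish only after the long `etl` job, even though each needs a single time step on its own. This head-of-line blocking is the motivation for the alternative schedulers (e.g., Fair and Capacity schedulers) that surveys such as this one compare.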