Performance Evaluation of Hadoop Distributed File System and Local File System
Journal: International Journal of Science and Research (IJSR), Vol. 3, No. 9
Publication Date: 2014-09-05
Authors: Linthala Srinithya; G. Venkata Rami Reddy
Page : 1174-1183
Keywords: HDFS; LFS; DFS; Read; Write; Update; Commodity hardware; Fault-tolerant
Abstract
Hadoop is a framework that enables applications to work with petabytes of data on large clusters with thousands of nodes built from commodity hardware. It provides the Hadoop Distributed File System (HDFS), which stores data on the compute nodes, providing very high aggregate bandwidth across the cluster. In addition, Hadoop implements a parallel computational paradigm named MapReduce, which divides an application into many small segments of work, each of which may be executed or re-executed on any node in the cluster. In this project I analyze the performance of HDFS and the Local File System (LFS) with respect to read and write operations. To measure performance, I set up a Hadoop cluster and design an interface that reports the size of a file and the time taken to upload it to or download it from the LFS and HDFS. From the literature survey, HDFS write performance is expected to scale well on both small and large data sets, although it is lower on the small data set. This work also draws a comparison between HDFS and LFS performance.
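The measurement described above (timing a file upload to the LFS versus to HDFS) can be sketched as a small Python script. This is a hedged illustration, not the authors' actual interface: the payload size, file names, and the `/tmp/sample.bin` HDFS destination are assumptions, and the HDFS upload is timed through the standard `hdfs dfs -put` command-line client, which must be on the PATH.

```python
import os
import shutil
import subprocess
import tempfile
import time


def time_local_write(data: bytes, dest_dir: str) -> float:
    """Time writing `data` to a file on the local file system (LFS)."""
    path = os.path.join(dest_dir, "sample.bin")
    start = time.perf_counter()
    with open(path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())  # ensure the bytes actually reach the disk
    return time.perf_counter() - start


def time_hdfs_put(local_path: str, hdfs_dest: str) -> float:
    """Time uploading a local file into HDFS via the `hdfs dfs -put` CLI."""
    start = time.perf_counter()
    subprocess.run(["hdfs", "dfs", "-put", "-f", local_path, hdfs_dest],
                   check=True)
    return time.perf_counter() - start


if __name__ == "__main__":
    data = os.urandom(4 * 1024 * 1024)  # 4 MiB sample payload (assumed size)
    with tempfile.TemporaryDirectory() as d:
        t_lfs = time_local_write(data, d)
        print(f"LFS write of {len(data)} bytes took {t_lfs:.4f} s")
        # Only attempt the HDFS upload when a Hadoop client is installed.
        if shutil.which("hdfs"):
            t_hdfs = time_hdfs_put(os.path.join(d, "sample.bin"),
                                   "/tmp/sample.bin")
            print(f"HDFS upload of {len(data)} bytes took {t_hdfs:.4f} s")
```

Comparing these two timings across a range of file sizes yields the kind of LFS-versus-HDFS read/write comparison the paper describes; in practice HDFS adds network and replication overhead, so its advantage appears only at larger data sizes.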