An Optimized Approach for Processing Small Files in HDFS
Journal: International Journal of Science and Research (IJSR), Vol. 6, No. 6
Publication Date: 2017-06-05
Authors: Deepika
Pages: 402-405
Keywords: Cloud storage; HDFS; Merging; Replica placement; SequenceFile
Abstract
In today's world, cloud storage has become an important part of the cloud computing ecosystem. Hadoop is an open-source framework for processing huge data sets, facilitating storage, analysis, management, and access across large numbers of machines in a distributed system. Much of the data that users create consists of small files. HDFS is a distributed file system that manages file processing across many machines in a distributed system with minimal hardware requirements for computation. However, the performance of HDFS degrades when it must store and access huge numbers of small files. This paper introduces optimized strategies for handling small-file processing in terms of storage and access efficiency: archiving techniques such as HAR and SequenceFile, merging algorithms, replica placement algorithms, the Structurally-Related Small Files (SSF) File Merging and Prefetching Scheme (FMP), and SSF-FMP with a three-level prefetching-caching technology. The proposed strategies effectively increase the access and storage efficiency of small files and incrementally shorten the time spent reading and writing small files when requested by clients.
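The merging idea the abstract refers to can be sketched in miniature: many small files are packed into one large container, and a lightweight index maps each file name to its offset and length so individual files can still be read back. The sketch below is a simplified, hypothetical illustration of that general technique; the names and structure are not taken from the paper, and a real HDFS deployment would instead use Hadoop's SequenceFile or HAR formats.

```python
import io

def merge_small_files(files):
    """Pack a {name: bytes} mapping of small files into one blob.

    Returns (blob, index), where index maps each file name to its
    (offset, length) inside the blob. This mirrors, in spirit, how
    container formats reduce per-file metadata pressure on the NameNode.
    (Hypothetical sketch; not the paper's actual merging algorithm.)
    """
    index = {}
    buf = io.BytesIO()
    for name, data in files.items():
        index[name] = (buf.tell(), len(data))
        buf.write(data)
    return buf.getvalue(), index

def read_merged(blob, index, name):
    """Retrieve one original small file from the merged blob."""
    offset, length = index[name]
    return blob[offset:offset + length]
```

Because only the single merged blob (plus a compact index) needs block-level metadata, the metadata cost grows with the number of containers rather than the number of small files, which is the core efficiency gain merging schemes aim for.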