Big Data Hadoop: Aggregation Techniques
Journal: International Journal of Science and Research (IJSR), Vol. 4, No. 12
Publication Date: 2015-12-05
Authors: Vidya Pol
Pages: 432-435
Keywords: Big Data; Hadoop; MapReduce; HDFS; approximate results
Abstract
The term Big Data refers to data sets whose size (volume), complexity (variability), and rate of growth (velocity) make them difficult to capture, manage, process, or analyze. Hadoop can be used to analyze this enormous amount of data, but processing is often time-consuming. One way to reduce response time is to execute the job partially, so that an approximate, early result becomes available to the user before the job completes. The technique is implemented on top of Hadoop and samples HDFS blocks uniformly. We will evaluate this technique using real-world datasets and applications and demonstrate the system's performance in terms of accuracy and time. The objective of the proposed technique is to significantly improve the performance of Hadoop MapReduce for efficient Big Data processing.
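To make the sampling idea concrete, the following is a minimal sketch, not the paper's actual implementation, of how uniform HDFS block sampling could be wired into Hadoop: a custom InputFormat that keeps a random fraction of the input splits, since each split typically corresponds to one HDFS block. The class name SamplingTextInputFormat and the configuration key sampling.inputformat.ratio are hypothetical names introduced for illustration.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

// Hypothetical InputFormat that keeps a uniform random subset of the input
// splits (each split normally maps to one HDFS block), so the MapReduce job
// processes only a fraction of the data and returns an early, approximate
// result. Illustrative sketch only, not the paper's implementation.
public class SamplingTextInputFormat extends TextInputFormat {

    // Hypothetical configuration key: fraction of splits to keep (0, 1].
    public static final String SAMPLE_RATIO = "sampling.inputformat.ratio";

    @Override
    public List<InputSplit> getSplits(JobContext job) throws IOException {
        List<InputSplit> all = super.getSplits(job);
        double ratio = job.getConfiguration().getDouble(SAMPLE_RATIO, 0.1);
        Random rnd = new Random(42); // fixed seed for reproducible samples
        List<InputSplit> sampled = new ArrayList<>();
        for (InputSplit split : all) {
            if (rnd.nextDouble() < ratio) {
                sampled.add(split);
            }
        }
        // Keep at least one split so the job always has some input.
        if (sampled.isEmpty() && !all.isEmpty()) {
            sampled.add(all.get(0));
        }
        return sampled;
    }
}
```

A job would opt in with job.setInputFormatClass(SamplingTextInputFormat.class) and conf.setDouble(SamplingTextInputFormat.SAMPLE_RATIO, 0.1); for aggregates such as counts or sums, the reduce-side result can then be scaled by the inverse of the sampling ratio to approximate the full-data answer.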