
Hadoop: Understanding the Big Data Processing Method

Journal: International Journal of Science and Research (IJSR) (Vol.4, No. 3)

Publication Date:

Authors :

Page : 1620-1624

Keywords : Apache Hadoop; big data; Java; Google File System; Google MapReduce; open source; MapR; Oracle; distributed file system; HDFS; redundant array of inexpensive disks; replication factor; NameNode; DataNode; MapReduce; JobTracker; TaskTracker; yet another resource negotiator

Source : Download | Find it from : Google Scholar

Abstract

Every day, we create 2.5 quintillion bytes of data, so much that 90% of the data in the world today has been created in the last two years alone. This data comes from everywhere: sensors used to gather climate information, posts to social media sites, digital pictures and videos, purchase transaction records, and cell phone GPS signals, to name a few. This data is big data. Big data requires different approaches: techniques, tools, architectures, and data processing methods. The main focus of the paper is to survey the state-of-the-art techniques and technologies for Big Data processing with the help of the Big Data application Hadoop.
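The MapReduce model the abstract refers to can be illustrated without a Hadoop cluster. The sketch below (not from the paper; class and method names are illustrative) shows the three conceptual phases of a word-count job in plain Java: a map phase emits (word, 1) pairs, a shuffle phase groups pairs by key, and a reduce phase sums the counts per word.

```java
import java.util.*;
import java.util.stream.*;

// Conceptual sketch of the MapReduce word-count pattern in plain Java.
// This deliberately avoids the real Hadoop API; in Hadoop the same logic
// would be split across a Mapper class, a Reducer class, and a Job driver,
// with HDFS supplying the input splits.
public class WordCountSketch {

    // Map phase: turn one input line into a list of (word, 1) pairs.
    static List<Map.Entry<String, Integer>> map(String line) {
        return Arrays.stream(line.toLowerCase().split("\\W+"))
                .filter(w -> !w.isEmpty())
                .map(w -> Map.entry(w, 1))
                .collect(Collectors.toList());
    }

    // Shuffle + reduce: group all emitted pairs by word, then sum the
    // counts for each word. groupingBy plays the role of the shuffle,
    // summingInt the role of the reducer.
    static Map<String, Integer> run(List<String> lines) {
        return lines.stream()
                .flatMap(l -> map(l).stream())
                .collect(Collectors.groupingBy(
                        Map.Entry::getKey,
                        Collectors.summingInt(Map.Entry::getValue)));
    }

    public static void main(String[] args) {
        Map<String, Integer> counts =
                run(List.of("big data big data", "hadoop processes big data"));
        // Counts: big=3, data=3, hadoop=1, processes=1 (map order unspecified)
        System.out.println(counts);
    }
}
```

The key point the paper builds on is that, because each map call touches only one line and each reduce call touches only one key, both phases parallelize trivially across the DataNodes of a cluster.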

Last modified: 2021-06-30 21:34:49