BIG DATA
Journal: INTERNATIONAL JOURNAL OF ENGINEERING TECHNOLOGIES AND MANAGEMENT RESEARCH (Vol. 5, No. 2)
Publication Date: 2018-02-27
Authors: Abhishek Dubey
Pages: 9-13
Keywords: Big Data; Hadoop; HDFS; MapReduce Architecture
Abstract
The term 'Big Data' describes innovative techniques and technologies for capturing, storing, distributing, managing, and analyzing petabyte-scale or larger datasets that arrive at high velocity and in diverse structures. Big data can be structured, semi-structured, or unstructured, which makes conventional data management techniques inadequate. Data is generated from many distinct sources and can arrive in the system at different rates, so parallelism is used to process this volume of data economically and efficiently. Big Data is data whose scale, diversity, and complexity require new architectures, techniques, algorithms, and analytics to manage it and to extract value and hidden knowledge from it. Hadoop is the core platform for structuring Big Data and solves the problem of making it useful for analytics. Hadoop is an open-source software project that enables the distributed processing of large datasets across clusters of commodity servers. It is designed to scale from a single server to thousands of machines, with a high degree of fault tolerance.
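Because the abstract highlights Hadoop's MapReduce architecture, a minimal example may help illustrate the processing model it refers to. The sketch below is the classic word-count job written against the org.apache.hadoop.mapreduce API; it is not taken from the paper itself, and names such as WordCount, TokenizerMapper, and IntSumReducer are illustrative choices rather than identifiers from the article.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: each mapper receives one line of input and emits (word, 1)
  // pairs; mappers run in parallel on the nodes holding the HDFS blocks.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reduce phase: the framework groups all values by key, so each reducer
  // sees one word with all of its counts and emits the total.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  // Driver: configures and submits the job; input and output are HDFS paths
  // passed on the command line.
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local pre-aggregation on each mapper
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Assuming the class is packaged into a jar, a job of this shape would typically be submitted with a command along the lines of: hadoop jar wordcount.jar WordCount <hdfs-input-path> <hdfs-output-path>. HDFS supplies the input splits to the mappers and stores the reducer output, which is the division of labor between HDFS and MapReduce that the abstract alludes to.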