
PIONEER APPROACH DATA DEDUPLICATION TO REMOVE REDUNDANT DATA FROM CLOUD STORAGE

Journal: International Journal of Advanced Research in Engineering and Technology (IJARET) (Vol.11, No. 10)

Publication Date:

Authors :

Page : 535-544

Keywords : Deduplication; Block-level deduplication; Convergent Encryption; File-level deduplication;

Source : Download Find it from : Google Scholar

Abstract

Cloud computing is a leading technology in the IT world, providing vast virtual storage and retention of information. Cloud storage is the most widely used cloud service, adopted by users at all levels. A key challenge for cloud storage providers is preventing the storage of unnecessary, duplicated data uploaded by different users. Removing replicated data from storage is a vital task, because duplicate data makes cloud storage management inefficient. This paper proposes eliminating redundant data from cloud storage using both file-level and block-level deduplication. A file is first checked at the file level; if it is not a duplicate, it is split into blocks and deduplication is applied at the block level. For secure deduplication, convergent encryption is used: different users encrypting the same data file derive the same key, so identical files produce identical ciphertexts and can still be deduplicated. Because sharing convergent keys is prone to attack, the paper introduces a cloud-based key creation and management service (KGTMaaaS) to maintain convergent keys and other sensitive data safely while removing duplicate data. The proposed approach effectively removes redundant data from cloud storage, saving unneeded storage allocation and network bandwidth and enabling proper use of cloud storage.
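The abstract's core ideas, file-level then block-level deduplication combined with convergent encryption, can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the key is derived as a SHA-256 hash of the content (the defining property of convergent encryption), and the `encrypt` function is a toy deterministic stream cipher standing in for a real cipher such as AES; the `DedupStore` class and `BLOCK_SIZE` are hypothetical names chosen for the example.

```python
import hashlib

def convergent_key(data: bytes) -> bytes:
    # Convergent encryption: the key is derived from the content itself,
    # so identical plaintexts always yield the identical key and ciphertext.
    return hashlib.sha256(data).digest()

def encrypt(data: bytes, key: bytes) -> bytes:
    # Toy deterministic stream cipher (SHA-256 in counter mode).
    # Stand-in for a real block cipher; for illustration only.
    out = bytearray()
    for i in range(0, len(data), 32):
        pad = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.extend(b ^ p for b, p in zip(data[i:i + 32], pad))
    return bytes(out)

BLOCK_SIZE = 4  # tiny for the demo; real systems use e.g. 4 KiB blocks

class DedupStore:
    """File-level check first; block-level deduplication for new files."""

    def __init__(self):
        self.files = {}   # file hash -> list of block hashes
        self.blocks = {}  # block hash -> convergent ciphertext

    def put(self, data: bytes) -> bool:
        fh = hashlib.sha256(data).hexdigest()
        if fh in self.files:           # file-level duplicate: store nothing
            return False
        refs = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            bh = hashlib.sha256(block).hexdigest()
            if bh not in self.blocks:  # block-level dedup: keep one copy
                self.blocks[bh] = encrypt(block, convergent_key(block))
            refs.append(bh)
        self.files[fh] = refs
        return True
```

For example, storing `b"abcdwxyz"` and then `b"abcdqrst"` keeps only three unique encrypted blocks, since the shared `abcd` block is deduplicated; re-uploading either file stores nothing at all.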

Last modified: 2021-02-20 21:55:57