
Effective and Efficient XML Duplicate Detection Using Levenshtein Distance Algorithm

Journal: International Journal of Science and Research (IJSR) (Vol.4, No. 6)

Publication Date:

Authors : ; ;

Page : 2676-2680

Keywords : duplicate detection; electronic data; hierarchical data; XML data; XML document;

Source : Download | Find it from : Google Scholar

Abstract

There is a large body of work on detecting duplicates in relational data, but only a few studies address duplication in more complex hierarchical structures. Electronic data is a key asset in many business operations, applications, and decisions, and as a result, ensuring its quality is essential. Duplicates are multiple representations of the same real-world object that differ from one another. Duplicate detection is not a trivial task, because duplicates are rarely exactly equal, often due to errors in the data. Consequently, data processing techniques cannot simply apply common comparison algorithms that identify only exact duplicates. Instead, all object representations must be compared, using a possibly complex matching strategy, to decide whether they refer to the same real-world object. Duplicate detection is relevant in data cleansing and data integration applications and has been studied extensively for relational data and XML documents. This paper proposes the Levenshtein distance algorithm, which is more effective and efficient than the previously used Normalized Edit Distance (NED) algorithm. The paper also provides the reader with a foundation for research in duplicate detection in XML or hierarchical data.
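As a minimal illustrative sketch (not the paper's implementation), the snippet below shows the classic dynamic-programming Levenshtein distance applied to the text content of two XML fragments that represent the same real-world object with a small spelling difference. The helper `xml_text` and the sample fragments are hypothetical and only serve to show how an edit-distance comparison could be driven from XML data.

import xml.etree.ElementTree as ET

def levenshtein(a: str, b: str) -> int:
    """Edit distance between a and b (insertions, deletions, substitutions)."""
    if len(a) < len(b):
        a, b = b, a
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        current = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            current.append(min(previous[j] + 1,          # deletion
                               current[j - 1] + 1,       # insertion
                               previous[j - 1] + cost))  # substitution
        previous = current
    return previous[-1]

def xml_text(fragment: str) -> str:
    """Concatenate the text content of an XML fragment (illustrative helper)."""
    return " ".join(t.strip() for t in ET.fromstring(fragment).itertext() if t.strip())

# Two representations of the same person, one containing a typo.
x1 = "<person><name>Jon Smith</name><city>New York</city></person>"
x2 = "<person><name>John Smith</name><city>New York</city></person>"
print(levenshtein(xml_text(x1), xml_text(x2)))  # 1: a single character differs

In practice, the raw distance would typically be compared against a threshold (possibly normalized by string length) to decide whether two XML elements are duplicates; the threshold choice here is left open, as the abstract does not specify one.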

Last modified: 2021-06-30 21:49:27