DUST Removal Framework Based on Improved Multiple Sequence Alignment Technique
Journal: International Journal of Science and Research (IJSR) (Vol. 8, No. 8)
Publication Date: 2019-08-05
Authors: Pulagam Sai Nandana; K N Brahmaji Rao
Pages: 975-979
Keywords: Search engines; Crawling; De-duplication; URL normalization; Rewrite rules
Abstract
A large number of URLs collected by web crawlers correspond to pages with duplicate or near-duplicate content. These duplicate URLs, generically known as DUST (Different URLs with Similar Text), adversely impact search engines, since crawling, storing, and indexing such data waste resources, degrade ranking quality, and hurt the user experience. To deal with this problem, several methods have been proposed to detect and remove duplicate documents without fetching their contents. To accomplish this, these methods learn normalization rules that transform all duplicate URLs into the same canonical form. This information can be used by crawlers to avoid fetching DUST. A challenging aspect of this strategy is to efficiently derive the minimum set of rules that achieves the largest reduction in duplicates with the smallest false-positive rate. As most methods are based on pairwise analysis, the quality of the rules is affected by the criterion used to select the examples and by the availability of representative examples in the training sets. To avoid processing large numbers of URLs, these methods resort to techniques such as random sampling or searching for DUST only within individual sites, which prevents the generation of rules involving multiple DNS names. As a consequence, current methods are very susceptible to noise and, in many cases, derive rules that are overly specific. In this work, we present a new approach that derives high-quality rules by taking advantage of a multi-sequence alignment strategy. We demonstrate that a full multi-sequence alignment of URLs with duplicated content, performed before rule generation, leads to very effective rules. Experimental results show that our approach achieved larger reductions in the number of duplicate URLs than our best baseline on two different web collections, while also being much faster. We also present a distributed version of our method, built on the MapReduce framework, and demonstrate its scalability by evaluating it on a set of 7.37 million URLs.
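As an illustration of the kind of normalization the abstract describes, the following is a minimal, hypothetical Python sketch: it aligns the tokens of two duplicate URLs to expose which parts vary, and applies a toy rewrite rule that collapses them into one canonical form. The example URLs, the tokenization, and the rule itself are assumptions made for illustration only; they are not the rules learned by the proposed method.

```python
# Hypothetical sketch (not the paper's algorithm): align tokenized duplicate
# URLs to expose a normalization rule, then canonicalize DUST before crawling.
from difflib import SequenceMatcher
from urllib.parse import urlsplit


def tokenize(url: str) -> list[str]:
    """Split a URL into scheme, host, path segments, and query parameters."""
    parts = urlsplit(url)
    tokens = [parts.scheme, parts.netloc]
    tokens += [seg for seg in parts.path.split("/") if seg]
    tokens += [kv for kv in parts.query.split("&") if kv]
    return tokens


def align(a: list[str], b: list[str]):
    """Pairwise alignment of two token lists: shared tokens vs. divergent gaps."""
    matcher = SequenceMatcher(a=a, b=b, autojunk=False)
    shared, divergent = [], []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":
            shared.extend(a[i1:i2])
        else:
            divergent.append((a[i1:i2], b[j1:j2]))
    return shared, divergent


# Two hypothetical duplicate URLs that differ only by a session-id parameter.
u1 = "http://example.com/catalog/item?id=42&sessionid=abc123"
u2 = "http://example.com/catalog/item?id=42&sessionid=zzz999"

shared, divergent = align(tokenize(u1), tokenize(u2))
print("invariant tokens:", shared)     # candidate canonical form
print("divergent tokens:", divergent)  # tokens a normalization rule should drop


def apply_rule(url: str, drop_param: str = "sessionid") -> str:
    """Toy normalization rule: remove one query parameter and rebuild the URL."""
    parts = urlsplit(url)
    kept = [kv for kv in parts.query.split("&")
            if kv and not kv.startswith(drop_param + "=")]
    query = "&".join(kept)
    return f"{parts.scheme}://{parts.netloc}{parts.path}" + (f"?{query}" if query else "")


assert apply_rule(u1) == apply_rule(u2)  # both collapse to one canonical URL
```

In this toy setting the alignment is only pairwise; the method described above instead aligns many duplicate URLs at once, which is what allows it to derive rules that generalize across examples and across DNS names.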