Faster and Resourceful Multi-core Web Crawling
Journal: International Journal of Science and Research (IJSR) (Vol. 1, No. 2)
Publication Date: 2012-11-05
Authors: Arun Kumar Dewangan; Asha Ambhaikar
Pages: 12-15
Keywords: Web crawling; multi core; indexing; parallel crawler; CPU; URL
Abstract
Due to the massive growth of the World Wide Web, search engines have become crucial tools for navigating web pages. To provide a fast and powerful search facility, a search engine maintains indexes of documents and their contents on the web by downloading web pages for processing in an iterative manner. Web indexes are created and managed by web crawlers, which work as modules of search engines and traverse the web systematically to index its contents. Web crawling is the process of fetching data from various servers; it is time-consuming because data must be gathered from many of them. Hence, to speed up searching in search engines, the crawling itself must be fast. The aim of the proposed system is to improve the speed of the crawling process and CPU utilization through the use of the multi-core concept.
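The core idea in the abstract can be illustrated with a minimal sketch, not the authors' implementation: distribute page fetches across CPU cores with a worker pool. The `fetch_page` function below is a hypothetical stand-in for a real HTTP download, so the sketch stays self-contained.

```python
# Minimal multi-core crawling sketch (illustrative, assumed design):
# a pool of worker processes fetches a batch of seed URLs in parallel.
from concurrent.futures import ProcessPoolExecutor


def fetch_page(url):
    # Hypothetical stand-in for a real HTTP request; a real crawler
    # would download the page, parse it, and extract new URLs here.
    return url, f"<html>content of {url}</html>"


def crawl(urls, workers=4):
    # Spread fetches across `workers` processes so that multiple CPU
    # cores download and process pages concurrently.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(fetch_page, urls))


if __name__ == "__main__":
    seeds = [f"http://example.com/page{i}" for i in range(8)]
    pages = crawl(seeds)
    print(len(pages))  # 8 pages fetched
```

A full crawler would feed the URLs extracted from each fetched page back into a shared frontier queue; the pool pattern above only shows how one batch of fetches can be parallelized across cores.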