Parallel web crawler architecture for clickstream analysis


Bibliographic Details
Main Authors: Ahmadi-Abkenari, Fatemeh, Selamat, Ali
Format: Book Section
Published: Springer 2012
Subjects:
Online Access:http://eprints.utm.my/id/eprint/35741/
http://dx.doi.org/10.1007/978-3-642-32826-8_13
Description
Summary:The tremendous growth of the Web poses many challenges for single-process crawlers, including the presence of irrelevant answers among search results as well as coverage and scaling issues. As a result, more robust algorithms are needed to produce more precise and relevant search results in a timely manner. Existing Web crawlers mostly implement link-dependent Web page importance metrics. One barrier to applying these metrics is that they produce considerable communication overhead in multi-agent crawlers. Moreover, they suffer from a high dependency on their own index size, which causes them to fail to rank Web pages with complete accuracy. Hence, more enhanced metrics need to be addressed in this area. Proposing a new Web page importance metric requires defining a new architecture as a framework in which to implement the metric. The aim of this paper is to propose an architecture for a focused parallel crawler. In this framework, decision-making on Web page importance is based on a combined metric of clickstream analysis and context-similarity analysis against the issued queries.
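The abstract describes ranking pages by a combined metric of clickstream analysis and context similarity to the issued query. A minimal sketch of what such a combination could look like is given below; the weighting scheme, the bag-of-words cosine similarity, and the visit-fraction clickstream score are assumptions for illustration, not the paper's actual formulation.

```python
import math
from collections import Counter


def cosine_similarity(text_a: str, text_b: str) -> float:
    """Bag-of-words cosine similarity between two texts (assumed similarity measure)."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0


def clickstream_score(visits: int, total_visits: int) -> float:
    """Fraction of observed clickstream traffic that reached this page (assumed form)."""
    return visits / total_visits if total_visits else 0.0


def page_importance(page_text: str, query: str, visits: int,
                    total_visits: int, alpha: float = 0.5) -> float:
    """Combined importance: weighted sum of clickstream and context-similarity scores.

    alpha is a hypothetical mixing weight between the two components.
    """
    return (alpha * clickstream_score(visits, total_visits)
            + (1.0 - alpha) * cosine_similarity(page_text, query))
```

A crawler built on such a metric could prioritize its frontier by this score instead of a link-dependent metric, avoiding the communication overhead of exchanging link information among parallel crawling agents.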