Learning to Crawl

Authors

  • Utkarsh Upadhyay, Reasonal
  • Róbert Busa-Fekete, Google Research
  • Wojciech Kotłowski, Poznan University of Technology
  • Dávid Pál, Yahoo! Research
  • Balázs Szörényi, Yahoo! Research

DOI:

https://doi.org/10.1609/aaai.v34i04.6067

Abstract

Web crawling is the problem of keeping a cache of webpages fresh, i.e., having the most recent copy available when a page is requested. This problem is usually coupled with the natural restriction that the bandwidth available to the web crawler is limited. The corresponding optimization problem was solved optimally by Azar et al. (2018) under the assumption that, for each webpage, both the elapsed time between two changes and the elapsed time between two requests follow a Poisson distribution with known parameters. In this paper, we study the same control problem but under the assumption that the change rates are unknown a priori, and thus we need to estimate them in an online fashion using only partial observations (i.e., single-bit signals indicating whether the page has changed since the last refresh). As a point of departure, we characterise the conditions under which one can solve the problem with such partial observability. Next, we propose a practical estimator and compute confidence intervals for it in terms of the elapsed time between the observations. Finally, we show that the explore-and-commit algorithm achieves an O(√T) regret with a carefully chosen exploration horizon. Our simulation study shows that our online policy scales well and achieves close to optimal performance for a wide range of parameters.
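To illustrate the partial-observability setting described above, the sketch below simulates the single-bit change signals and recovers the change rate by maximum likelihood. Under a Poisson change process with rate r, a page observed after an interval τ has changed with probability 1 − exp(−rτ); the estimator here maximizes the resulting Bernoulli log-likelihood by bisection. This is a minimal illustration of the observation model, not the paper's actual estimator; all function names and parameters are our own.

```python
import math
import random

def simulate_signals(rate, intervals, rng):
    """For each refresh interval tau, emit the single-bit signal
    'page changed since last refresh', which under a Poisson change
    process with the given rate occurs with prob. 1 - exp(-rate*tau)."""
    return [1 if rng.random() < 1.0 - math.exp(-rate * t) else 0
            for t in intervals]

def estimate_rate(intervals, signals, lo=1e-6, hi=1e3, iters=100):
    """Maximum-likelihood estimate of the change rate from binary
    signals, via bisection on the derivative of the log-likelihood
        L(r) = sum_i [ x_i*log(1 - exp(-r*tau_i)) - (1 - x_i)*r*tau_i ].
    """
    def dL(r):
        s = 0.0
        for t, x in zip(intervals, signals):
            if x:
                s += t * math.exp(-r * t) / (1.0 - math.exp(-r * t))
            else:
                s -= t
        return s

    # dL is strictly decreasing in r, so bisect for its root.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if dL(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: recover a known change rate from 2000 one-unit intervals.
rng = random.Random(0)
true_rate = 0.5
intervals = [1.0] * 2000
signals = simulate_signals(true_rate, intervals, rng)
estimate = estimate_rate(intervals, signals)
```

Note the identifiability issue the abstract alludes to: if every signal is 1 (the page changed in every interval), the likelihood is maximized as r → ∞, so the estimate is only informative when the refresh intervals are short enough relative to the change rate.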

Published

2020-04-03

How to Cite

Upadhyay, U., Busa-Fekete, R., Kotłowski, W., Pál, D., & Szörényi, B. (2020). Learning to Crawl. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 6046-6053. https://doi.org/10.1609/aaai.v34i04.6067

Section

AAAI Technical Track: Machine Learning