Thompson Sampling for Stochastic Bandits with Graph Feedback

Authors

  • Aristide Tossou, Chalmers University of Technology
  • Christos Dimitrakakis, University of Lille and Chalmers University of Technology
  • Devdatt Dubhashi, Chalmers University of Technology

DOI:

https://doi.org/10.1609/aaai.v31i1.10897

Keywords:

Thompson sampling, Stochastic multi-armed bandit, graphical learning, online learning

Abstract

We present a simple set of algorithms based on Thompson Sampling for stochastic bandit problems with graph feedback. Thompson Sampling is generally applicable, without the need to construct complicated upper confidence bounds. As we show in this paper, it has excellent performance in problems with graph feedback, even when the graph structure itself is unknown and/or changing. We provide theoretical guarantees on the Bayesian regret of the algorithm, as well as extensive experimental results on real and simulated networks. More specifically, we tested our algorithms on power law, planted partition, and Erdős–Rényi graphs, as well as on graphs derived from Facebook and Flixster data, and show that they clearly outperform related methods that employ upper confidence bounds.
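To make the setting concrete, the sketch below shows a minimal Beta-Bernoulli Thompson Sampling loop with graph feedback, where playing an arm also reveals the rewards of its neighbors in the feedback graph and all observed arms' posteriors are updated. This is an illustrative assumption of the general scheme, not the paper's exact algorithm or analysis; the names `thompson_sampling_graph`, `neighbors`, and `horizon` are hypothetical.

```python
import numpy as np

def thompson_sampling_graph(means, neighbors, horizon, rng=None):
    """Minimal sketch: Beta-Bernoulli Thompson Sampling with graph feedback.

    means     -- true Bernoulli reward means (unknown to the learner).
    neighbors -- neighbors[i] is the list of arms whose rewards are observed
                 when arm i is played (assumed to include i itself).
    horizon   -- number of rounds.
    """
    rng = rng or np.random.default_rng()
    n_arms = len(means)
    alpha = np.ones(n_arms)  # Beta posterior parameter: observed successes + 1
    beta = np.ones(n_arms)   # Beta posterior parameter: observed failures + 1
    total_reward = 0.0

    for _ in range(horizon):
        # Sample a mean estimate for each arm from its posterior; play the best.
        samples = rng.beta(alpha, beta)
        arm = int(np.argmax(samples))

        # Graph feedback: observe rewards for every neighbor of the played arm
        # and update each observed arm's posterior.
        for j in neighbors[arm]:
            r = rng.binomial(1, means[j])
            alpha[j] += r
            beta[j] += 1 - r
            if j == arm:
                total_reward += r  # only the played arm's reward is collected

    return total_reward
```

With full graph feedback (every arm a neighbor of every other), each round updates all posteriors; with an empty graph the sketch reduces to standard Thompson Sampling for the multi-armed bandit.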

Published

2017-02-13

How to Cite

Tossou, A., Dimitrakakis, C., & Dubhashi, D. (2017). Thompson Sampling for Stochastic Bandits with Graph Feedback. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.10897