Improved Algorithms for Conservative Exploration in Bandits

Authors

  • Evrard Garcelon, Facebook AI Research
  • Mohammad Ghavamzadeh, Facebook AI Research
  • Alessandro Lazaric, Facebook AI Research
  • Matteo Pirotta, Facebook AI Research

DOI:

https://doi.org/10.1609/aaai.v34i04.5812

Abstract

In many fields such as digital marketing, healthcare, finance, and robotics, it is common to have a well-tested and reliable baseline policy running in production (e.g., a recommender system). Nonetheless, the baseline policy is often suboptimal. In this case, it is desirable to deploy online learning algorithms (e.g., a multi-armed bandit algorithm) that interact with the system to learn a better/optimal policy under the constraint that during the learning process the performance is almost never worse than the performance of the baseline itself. In this paper, we study the conservative learning problem in the contextual linear bandit setting and introduce a novel algorithm, the Conservative Constrained LinUCB (CLUCB2). We derive regret bounds for CLUCB2 that match existing results and empirically show that it outperforms state-of-the-art conservative bandit algorithms in a number of synthetic and real-world problems. Finally, we consider a more realistic constraint where the performance is verified only at predefined checkpoints (instead of at every step) and show how this relaxed constraint favorably impacts the regret and empirical performance of CLUCB2.
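
For reference, the conservative constraint studied in this line of work is usually written as a lower bound on the learner's cumulative performance relative to the baseline. The formulation below is a standard sketch from the conservative bandits literature (not copied from the paper itself); the symbols $\mu_{a_s}$, $\mu_{b_s}$, and $\alpha$ are illustrative notation.

\[
\sum_{s=1}^{t} \mu_{a_s} \;\ge\; (1-\alpha) \sum_{s=1}^{t} \mu_{b_s}, \qquad \text{for all } t \ge 1,
\]

where $\mu_{a_s}$ is the expected reward of the action played at step $s$, $\mu_{b_s}$ is the expected reward of the baseline action, and $\alpha \in (0,1)$ bounds the tolerated performance loss. The checkpoint relaxation discussed in the abstract requires the same inequality to hold only at a predefined set of steps $t \in \{t_1, t_2, \ldots\}$ rather than at every step.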

Published

2020-04-03

How to Cite

Garcelon, E., Ghavamzadeh, M., Lazaric, A., & Pirotta, M. (2020). Improved Algorithms for Conservative Exploration in Bandits. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 3962-3969. https://doi.org/10.1609/aaai.v34i04.5812

Issue

Vol. 34 No. 04 (2020)

Section

AAAI Technical Track: Machine Learning