Incentivized Exploration for Multi-Armed Bandits under Reward Drift

Authors

  • Zhiyuan Liu, University of Colorado, Boulder
  • Huazheng Wang, University of Virginia
  • Fan Shen, University of Colorado, Boulder
  • Kai Liu, Clemson University
  • Lijun Chen, University of Colorado, Boulder

DOI

https://doi.org/10.1609/aaai.v34i04.5937

Abstract

We study incentivized exploration for the multi-armed bandit (MAB) problem, where players receive compensation for exploring arms other than the greedy choice and may, in turn, provide biased feedback on the reward. We seek to understand the impact of this drifted reward feedback by analyzing three instantiations of the incentivized MAB algorithm: UCB, ε-Greedy, and Thompson Sampling. Our results show that all three achieve O(log T) regret and compensation under the drifted reward and are therefore effective in incentivizing exploration. Numerical examples complement the theoretical analysis.
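To make the setup concrete, below is a minimal simulation sketch of the UCB instantiation. It is not the paper's exact algorithm: the Gaussian reward model, the choice of compensation as the empirical-mean gap between the greedy arm and the recommended arm, and the drift model (feedback biased upward by at most the compensation received) are all illustrative assumptions, as are the arm count K and horizon T.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical problem instance: K Gaussian arms with unit noise.
K, T = 5, 10_000
true_means = rng.uniform(0.0, 1.0, size=K)
best_mean = true_means.max()

counts = np.zeros(K)   # pulls per arm
means = np.zeros(K)    # empirical means built from (possibly drifted) feedback
regret = compensation = 0.0

for t in range(1, T + 1):
    if t <= K:                 # pull each arm once to initialize
        arm, pay = t - 1, 0.0
    else:
        ucb = means + np.sqrt(2.0 * np.log(t) / counts)
        arm = int(np.argmax(ucb))        # principal's recommendation
        greedy = int(np.argmax(means))   # player's myopic choice
        # Pay the empirical-mean gap so the player is willing to explore
        # (assumed compensation rule, not taken from the paper).
        pay = max(0.0, means[greedy] - means[arm])

    reward = true_means[arm] + rng.normal()
    # Reward drift (assumption): reported feedback is biased upward by
    # an amount bounded by the compensation just received.
    feedback = reward + rng.uniform(0.0, pay)

    counts[arm] += 1
    means[arm] += (feedback - means[arm]) / counts[arm]
    regret += best_mean - true_means[arm]
    compensation += pay

print(f"cumulative regret: {regret:.1f}, total compensation: {compensation:.1f}")
```

Under these assumptions, both the cumulative regret and the total compensation grow slowly with T, consistent with the O(log T) bounds stated in the abstract.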

Published

2020-04-03

How to Cite

Liu, Z., Wang, H., Shen, F., Liu, K., & Chen, L. (2020). Incentivized Exploration for Multi-Armed Bandits under Reward Drift. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 4981-4988. https://doi.org/10.1609/aaai.v34i04.5937

Section

AAAI Technical Track: Machine Learning