Disposable Linear Bandits for Online Recommendations

Authors

  • Melda Korkut, Carnegie Mellon University
  • Andrew Li, Carnegie Mellon University

Keywords:

Recommender Systems & Collaborative Filtering, Online Learning & Bandits

Abstract

We study the classic stochastic linear bandit problem under the restriction that each arm may be selected only a limited number of times. This simple constraint, which we call disposability, captures a common restriction in recommendation problems across a diverse array of applications, ranging from personalized styling services to dating platforms. We show that the regret for this problem is characterized by a previously-unstudied function of the reward distribution among optimal arms. Algorithmically, our upper bound relies on an optimism-based policy which, while computationally intractable, lends itself to approximation via a fast alternating heuristic initialized with a classic similarity score. Experiments show that our policy dominates a set of benchmarks which includes algorithms known to be optimal for the linear bandit without disposability, along with natural modifications to these algorithms for the disposable setting.
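To make the setting concrete, the following is a minimal sketch of a disposable stochastic linear bandit simulation. It pairs a standard LinUCB-style optimistic policy with a per-arm selection cap; the specific dimensions, cap, noise level, and the `alpha` exploration parameter are illustrative assumptions, not the paper's exact model or algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem instance (assumed parameters, not from the paper).
d, n_arms, horizon = 5, 50, 30
theta_star = rng.normal(size=d)
theta_star /= np.linalg.norm(theta_star)
arms = rng.normal(size=(n_arms, d))
arms /= np.linalg.norm(arms, axis=1, keepdims=True)

# Disposability: each arm may be selected at most `cap` times.
cap = 1
remaining = np.full(n_arms, cap)

# LinUCB-style optimism, restricted to arms with remaining capacity.
lam, alpha, noise = 1.0, 1.0, 0.1
V = lam * np.eye(d)   # regularized Gram matrix of selected contexts
b = np.zeros(d)       # running sum of x_t * r_t

total_reward = 0.0
for t in range(horizon):
    V_inv = np.linalg.inv(V)
    theta_hat = V_inv @ b
    # Optimistic index: estimated reward plus a confidence width.
    widths = np.sqrt(np.einsum("ij,jk,ik->i", arms, V_inv, arms))
    ucb = arms @ theta_hat + alpha * widths
    ucb[remaining <= 0] = -np.inf  # rule out exhausted (disposed) arms
    a = int(np.argmax(ucb))
    x = arms[a]
    r = x @ theta_star + noise * rng.normal()
    V += np.outer(x, x)
    b += r * x
    remaining[a] -= 1
    total_reward += r
```

Note that without the capacity mask, an optimistic policy would happily replay the single best arm; disposability forces it to spread selections across near-optimal arms, which is why the regret depends on the reward distribution among optimal arms rather than on the single best one.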

Published

2021-05-18

How to Cite

Korkut, M., & Li, A. (2021). Disposable Linear Bandits for Online Recommendations. Proceedings of the AAAI Conference on Artificial Intelligence, 35(5), 4172-4180. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16540

Section

AAAI Technical Track on Data Mining and Knowledge Management