Multinomial Logit Contextual Bandits: Provable Optimality and Practicality

Authors

  • Min-hwan Oh Seoul National University
  • Garud Iyengar Columbia University

Keywords

Online Learning & Bandits, Reinforcement Learning

Abstract

We consider a sequential assortment selection problem where user choices follow a multinomial logit (MNL) model with unknown parameters. In each period, the learning agent observes d-dimensional contextual information about the user and the N available items, offers an assortment of size K, and observes bandit feedback: the item chosen from the offered assortment. We propose upper confidence bound (UCB) based algorithms for this MNL contextual bandit. The first algorithm is a simple and practical method that achieves an O(d√T) regret over T rounds. We then propose a second algorithm that achieves an O(√dT) regret; this matches the lower bound for the MNL bandit problem up to logarithmic terms and improves on the best-known result by a factor of √d. To establish this sharper regret bound, we present a non-asymptotic confidence bound for the maximum likelihood estimator of the MNL model, which may be of independent interest. Finally, we revisit the simpler and significantly more practical first algorithm and show that a simple variant of it achieves the optimal regret for a broad class of important applications.
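For readers unfamiliar with the model, the MNL choice probabilities underlying the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the feature matrix, parameter vector, and assortment size are hypothetical, and the outside (no-purchase) option is given utility 0, as is standard for MNL assortment models.

```python
import numpy as np

def mnl_choice_probs(X, theta):
    """Choice probabilities under an MNL model with an outside option.

    X: (K, d) feature matrix of the offered assortment of K items.
    theta: (d,) unknown model parameter (here assumed known for illustration).
    Item i has utility x_i^T theta; the outside option has utility 0,
    so its exponentiated attraction weight is 1.
    """
    v = np.exp(X @ theta)      # attraction weights exp(x_i^T theta)
    denom = 1.0 + v.sum()      # the +1 accounts for the outside option
    return v / denom           # P(user chooses item i | assortment)

# Toy example with hypothetical values: K = 3 items, d = 4 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
theta = rng.normal(size=4)
p = mnl_choice_probs(X, theta)
```

In the bandit setting, theta is unknown; a UCB-style algorithm replaces it with an optimistic estimate and offers the assortment maximizing the resulting expected reward.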

Published

2021-05-18

How to Cite

Oh, M.-H., & Iyengar, G. (2021). Multinomial Logit Contextual Bandits: Provable Optimality and Practicality. Proceedings of the AAAI Conference on Artificial Intelligence, 35(10), 9205-9213. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17111

Section

AAAI Technical Track on Machine Learning III