Mixed-Effects Contextual Bandits


  • Kyungbok Lee Department of Statistics, Seoul National University
  • Myunghee Cho Paik Department of Statistics, Seoul National University Shepherd23 Inc.
  • Min-hwan Oh Graduate School of Data Science, Seoul National University
  • Gi-Soo Kim Department of Industrial Engineering, Ulsan National Institute of Science and Technology




ML: Online Learning & Bandits


We study a novel variant of the contextual bandit problem with multi-dimensional reward feedback formulated as a mixed-effects model, where the correlations among the multiple feedback components are induced by sharing stochastic coefficients called random effects. We propose a novel algorithm, Mixed-Effects Contextual UCB (ME-CUCB), achieving an Õ(d√(mT)) regret bound after T rounds, where d is the dimension of the contexts and m is the dimension of the outcomes, with either known or unknown covariance structure. This regret bound is tighter than that of a naive canonical linear bandit algorithm that ignores the correlations among rewards. We prove a lower bound of Ω(d√(mT)), matching the upper bound up to logarithmic factors. To our knowledge, this is the first work providing a regret analysis for mixed-effects models and for algorithms involving weighted least-squares estimators. Our theoretical analysis faces a significant technical challenge: the error terms do not form martingales, since the weights depend on the rewards. We overcome this challenge with a covering-number argument, which is of theoretical interest in its own right. We provide numerical experiments demonstrating the advantage of our proposed algorithm and supporting the theoretical claims.
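To make the reward model concrete, the following is a minimal NumPy sketch of a mixed-effects reward structure and the weighted (generalized) least-squares estimator the abstract alludes to. The intercept-only random effect shared across the m outcomes, the Gaussian noise, the specific dimensions, and the assumption of a known covariance V are all illustrative simplifications, not the paper's exact setting or algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, T = 5, 4, 2000
theta = rng.normal(size=d)        # unknown fixed-effect parameter to estimate
sigma_b, sigma_e = 0.5, 0.1       # random-effect / noise scales (assumed known here)

# Mixed-effects rewards: y_t = X_t @ theta + b_t * 1_m + eps_t, where the scalar
# random effect b_t is shared by all m outcomes of round t, inducing correlation.
# The resulting per-round covariance is V = sigma_b^2 * 11^T + sigma_e^2 * I.
V = sigma_b**2 * np.ones((m, m)) + sigma_e**2 * np.eye(m)
Vinv = np.linalg.inv(V)

A = np.zeros((d, d))
c = np.zeros(d)
for _ in range(T):
    X = rng.normal(size=(m, d))              # one context row per outcome
    b = rng.normal(scale=sigma_b)            # random effect shared within the round
    y = X @ theta + b + rng.normal(scale=sigma_e, size=m)
    A += X.T @ Vinv @ X                      # weighted Gram matrix
    c += X.T @ Vinv @ y                      # weighted response accumulation

theta_hat = np.linalg.solve(A, c)            # weighted least-squares (GLS) estimate
print(np.max(np.abs(theta_hat - theta)))     # estimation error shrinks with T
```

Weighting by V⁻¹ is exactly what exploits the correlation among the m outcomes: with iid observations an ordinary least-squares fit would need roughly m times more rounds for the same accuracy along the correlated directions.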



How to Cite

Lee, K., Paik, M. C., Oh, M.-H., & Kim, G.-S. (2024). Mixed-Effects Contextual Bandits. Proceedings of the AAAI Conference on Artificial Intelligence, 38(12), 13409-13417. https://doi.org/10.1609/aaai.v38i12.29243



AAAI Technical Track on Machine Learning III