Reward-Biased Maximum Likelihood Estimation for Neural Contextual Bandits: A Distributional Learning Perspective

Authors

  • Yu-Heng Hung, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
  • Ping-Chun Hsieh, National Yang Ming Chiao Tung University, Hsinchu, Taiwan

DOI

https://doi.org/10.1609/aaai.v37i7.25961

Keywords

ML: Online Learning & Bandits

Abstract

Reward-biased maximum likelihood estimation (RBMLE) is a classic principle in the adaptive control literature for tackling the explore-exploit trade-off. This paper studies the neural contextual bandit problem from a distributional perspective and proposes NeuralRBMLE, which leverages the likelihood of surrogate parametric distributions to learn the unknown reward distributions and then adapts the RBMLE principle to achieve efficient exploration by adding a suitable reward-bias term. NeuralRBMLE exploits the representation power of neural networks and directly encodes exploratory behavior in the parameter space, without constructing confidence intervals of the estimated rewards. We propose two variants of NeuralRBMLE: the first obtains the RBMLE estimator directly by gradient ascent, and the second reduces RBMLE to an index policy through an approximation. We show that both algorithms achieve order-optimality. Through extensive experiments, we demonstrate that the NeuralRBMLE algorithms achieve comparable or lower empirical regret than state-of-the-art methods on real-world datasets with non-linear reward functions.
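
As a rough illustration of the RBMLE idea summarized above (a minimal sketch, not the paper's exact NeuralRBMLE objective), the snippet below biases a neural reward model's log-likelihood toward parameters that promise high reward and then acts greedily under the biased estimate. The Gaussian surrogate likelihood with unit variance, the small MLP `RewardNet`, the bias weight `alpha`, and all function names are illustrative assumptions.

```python
# Sketch of an RBMLE-style objective for a neural contextual bandit.
# Assumptions: unit-variance Gaussian surrogate likelihood, a small MLP
# reward model, and an illustrative bias weight `alpha`.
import torch
import torch.nn as nn


class RewardNet(nn.Module):
    def __init__(self, dim, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)


def rbmle_objective(model, contexts, rewards, candidate_arms, alpha):
    """Log-likelihood of past data plus a reward-bias term.

    Under the unit-variance Gaussian surrogate, the log-likelihood reduces
    to a negative squared error; the bias adds alpha times the largest
    predicted reward over the current candidate arms, tilting the estimate
    toward parameters that promise high reward (exploration).
    """
    log_lik = -0.5 * ((model(contexts) - rewards) ** 2).sum()
    bias = alpha * model(candidate_arms).max()
    return log_lik + bias


def select_arm(model, contexts, rewards, candidate_arms, alpha,
               steps=50, lr=1e-2):
    """Variant-1-style selection: ascend the biased objective by gradient
    ascent, then play the arm with the highest predicted reward."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -rbmle_objective(model, contexts, rewards,
                                candidate_arms, alpha)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return int(model(candidate_arms).argmax())
```

In this kind of scheme the bias weight is typically chosen to grow sublinearly in the round index (e.g. on the order of the square root of t), so that the reward bias drives exploration early on and is gradually outweighed by the accumulated log-likelihood.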

Published

2023-06-26

How to Cite

Hung, Y.-H., & Hsieh, P.-C. (2023). Reward-Biased Maximum Likelihood Estimation for Neural Contextual Bandits: A Distributional Learning Perspective. Proceedings of the AAAI Conference on Artificial Intelligence, 37(7), 7944-7952. https://doi.org/10.1609/aaai.v37i7.25961

Section

AAAI Technical Track on Machine Learning II