Reward-Biased Maximum Likelihood Estimation for Linear Stochastic Bandits
Keywords: Online Learning & Bandits
Abstract
Modifying the reward-biased maximum likelihood method originally proposed in the adaptive control literature, we propose novel learning algorithms that handle the explore-exploit trade-off in linear bandit problems as well as generalized linear bandit problems. We develop novel index policies that we prove achieve order-optimal regret, and we show in extensive experiments that their empirical performance is competitive with state-of-the-art benchmark methods. The new policies achieve this with low per-pull computation time for linear bandits, yielding both favorable regret and computational efficiency.
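To make the idea concrete, here is a minimal sketch of a reward-biased index policy for a Gaussian linear bandit. It is an illustrative assumption, not the paper's exact algorithm: maximizing a log-likelihood biased by an extra reward term alpha * x_a^T theta yields, in closed form, the ridge estimate's predicted mean plus a bias term alpha * ||x_a||^2 in the inverse Gram-matrix norm. The function name `rbmle_index`, the bias schedule `alpha = sqrt(t+1)`, and the ridge parameter `lam` are all hypothetical choices for illustration.

```python
import numpy as np

def rbmle_index(X, r, arms, alpha, lam=1.0):
    """Sketch of a reward-biased MLE index (assumed form, not the paper's exact one).

    X: (n, d) matrix of past arm features; r: (n,) past rewards;
    arms: (K, d) candidate arm features; alpha: reward-bias weight.
    """
    d = arms.shape[1]
    V = lam * np.eye(d) + X.T @ X          # regularized Gram matrix
    V_inv = np.linalg.inv(V)
    theta_hat = V_inv @ (X.T @ r)          # ridge / penalized MLE estimate
    # Biasing the likelihood toward higher rewards adds, per arm a,
    # an alpha * ||x_a||^2_{V^{-1}} term on top of the greedy estimate.
    means = arms @ theta_hat
    bias = alpha * np.einsum('ij,jk,ik->i', arms, V_inv, arms)
    return means + bias

# Usage: a 3-armed toy linear bandit with a growing bias schedule.
rng = np.random.default_rng(0)
theta_true = np.array([1.0, -0.5])
arms = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
X_hist, r_hist = [], []
for t in range(200):
    idx = rbmle_index(np.array(X_hist).reshape(-1, 2), np.array(r_hist),
                      arms, alpha=np.sqrt(t + 1))
    a = int(np.argmax(idx))                # pull the arm with the largest index
    X_hist.append(arms[a])
    r_hist.append(arms[a] @ theta_true + 0.1 * rng.standard_normal())
```

Because the Gram matrix grows linearly in t while the bias weight grows only like sqrt(t), the bias term vanishes over time and the policy becomes greedy, which is the mechanism behind the low per-pull cost: each step is a single ridge update plus K quadratic forms.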
How to Cite
Hung, Y.-H., Hsieh, P.-C., Liu, X., & Kumar, P. R. (2021). Reward-Biased Maximum Likelihood Estimation for Linear Stochastic Bandits. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 7874-7882. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16961
AAAI Technical Track on Machine Learning II