Meta-Learning for Simple Regret Minimization

Authors

  • Javad Azizi, University of Southern California
  • Branislav Kveton, Amazon
  • Mohammad Ghavamzadeh, Google Research
  • Sumeet Katariya, Amazon

DOI:

https://doi.org/10.1609/aaai.v37i6.25823

Keywords:

ML: Online Learning & Bandits, ML: Meta Learning

Abstract

We develop a meta-learning framework for simple regret minimization in bandits. In this framework, a learning agent interacts with a sequence of bandit tasks, which are sampled i.i.d. from an unknown prior distribution, and learns its meta-parameters to perform better on future tasks. We propose the first Bayesian and frequentist meta-learning algorithms for this setting. The Bayesian algorithm has access to a prior distribution over the meta-parameters, and its meta simple regret over m bandit tasks with horizon n is a mere O(m/√n). In contrast, the meta simple regret of the frequentist algorithm is O(n√m + m/√n). While its regret is worse, the frequentist algorithm is more general because it does not need a prior distribution over the meta-parameters; it can also be analyzed in more settings. We instantiate our algorithms for several classes of bandit problems. Our algorithms are general, and we complement our theory by evaluating them empirically in several environments.
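The protocol in the abstract (m bandit tasks drawn i.i.d. from an unknown prior, horizon n each, with simple regret measured on the arm recommended at the end of each task) can be sketched in simulation. The code below is a minimal illustration, not the paper's algorithm: the prior is Gaussian, the rewards are Gaussian, and the exploration rule is a hypothetical Thompson-style heuristic in which a cross-task running average of empirical arm means acts as one pseudo-observation per arm. All function and parameter names are invented for this sketch.

```python
import random
import math

def meta_simple_regret(m, n, K, prior_mu, prior_sd, seed=0):
    """Simulate the meta-learning protocol for simple regret minimization.

    m tasks are sampled i.i.d. from a Gaussian prior over K arm means
    (the prior is unknown to the agent). In each task the agent explores
    for n rounds (n >= K assumed), then recommends the empirically best
    arm; its simple regret is the gap of the recommended arm. Across
    tasks the agent keeps a running average of empirical arm means and
    feeds it into exploration as a pseudo-observation (a hypothetical
    Thompson-style rule, not the paper's algorithm).
    Returns the total simple regret over the m tasks.
    """
    rng = random.Random(seed)
    meta_mean = [0.0] * K   # cross-task running average of empirical arm means
    tasks_seen = 0
    total = 0.0
    for _ in range(m):
        # sample a task from the (unknown-to-the-agent) prior
        mu = [rng.gauss(prior_mu[a], prior_sd) for a in range(K)]
        counts, sums = [0] * K, [0.0] * K
        for t in range(n):
            if t < K:
                a = t  # initialization: pull every arm once
            else:
                # per-arm posterior sample; the meta estimate counts as
                # one extra pseudo-observation of that arm
                a = max(range(K), key=lambda i: rng.gauss(
                    (sums[i] + meta_mean[i]) / (counts[i] + 1),
                    1.0 / math.sqrt(counts[i] + 1)))
            sums[a] += mu[a] + rng.gauss(0.0, 1.0)  # noisy reward
            counts[a] += 1
        est = [sums[i] / counts[i] for i in range(K)]
        rec = max(range(K), key=lambda i: est[i])   # recommended arm
        total += max(mu) - mu[rec]                  # simple regret of this task
        # fold this task's empirical means into the meta estimate
        tasks_seen += 1
        meta_mean = [(meta_mean[i] * (tasks_seen - 1) + est[i]) / tasks_seen
                     for i in range(K)]
    return total
```

As the abstract's O(m/√n) bound suggests, longer horizons shrink the per-task simple regret; in this toy simulation, comparing small and large n at fixed m shows the same qualitative trend.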

Published

2023-06-26

How to Cite

Azizi, J., Kveton, B., Ghavamzadeh, M., & Katariya, S. (2023). Meta-Learning for Simple Regret Minimization. Proceedings of the AAAI Conference on Artificial Intelligence, 37(6), 6709-6717. https://doi.org/10.1609/aaai.v37i6.25823

Section

AAAI Technical Track on Machine Learning I