Learning Not to Regret

Authors

  • David Sychrovský, Charles University; Czech Technical University
  • Michal Šustr, Czech Technical University; EquiLibre Technologies
  • Elnaz Davoodi, DeepMind
  • Michael Bowling, University of Alberta
  • Marc Lanctot, DeepMind
  • Martin Schmid, Charles University; EquiLibre Technologies

DOI:

https://doi.org/10.1609/aaai.v38i14.29443

Keywords:

ML: Online Learning & Bandits

Abstract

The literature on game-theoretic equilibrium finding predominantly focuses on single games or their repeated play. Nevertheless, numerous real-world scenarios feature playing a game sampled from a distribution of similar, but not identical, games, such as playing poker with different public cards or trading correlated assets on the stock market. As these similar games feature similar equilibria, we investigate a way to accelerate equilibrium finding on such a distribution. We present a novel "learning not to regret" framework, enabling us to meta-learn a regret minimizer tailored to a specific distribution. Our key contribution, Neural Predictive Regret Matching, is uniquely meta-learned to converge rapidly for the chosen distribution of games, while having regret minimization guarantees on any game. We validate our algorithms' faster convergence on a distribution of river poker games. Our experiments show that the meta-learned algorithms outpace their non-meta-learned counterparts, achieving more than tenfold improvements.
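
To make the regret-matching template the paper builds on concrete, below is a minimal Python/NumPy sketch of predictive regret matching: the strategy is proportional to the positive part of the cumulative regret plus a prediction of the next instantaneous regret. In the paper that predictor is the meta-learned neural network; the sketch, including the function name predictive_regret_matching and its interface, is purely illustrative and defaults to predicting the last observed regret.

    import numpy as np

    def predictive_regret_matching(losses, predictor=None):
        """Run (predictive) regret matching over a sequence of loss vectors.

        `predictor` is an illustrative stand-in for the paper's meta-learned
        prediction network; by default the last instantaneous regret is used
        as the prediction. Returns the sequence of strategies played.
        """
        losses = np.asarray(losses, dtype=float)
        n = losses.shape[1]
        cum_regret = np.zeros(n)
        last_regret = np.zeros(n)
        strategies = []
        for loss in losses:
            prediction = predictor(cum_regret) if predictor else last_regret
            # Play proportionally to the positive part of the predicted
            # cumulative regret; fall back to uniform if no positive regret.
            positive = np.maximum(cum_regret + prediction, 0.0)
            total = positive.sum()
            sigma = positive / total if total > 0 else np.full(n, 1.0 / n)
            strategies.append(sigma)
            # Instantaneous regret: expected loss minus each action's loss.
            last_regret = sigma @ loss - loss
            cum_regret += last_regret
        return np.array(strategies)

    # Tiny usage example on random losses.
    rng = np.random.default_rng(0)
    print(predictive_regret_matching(rng.random((100, 3)))[-1])

Roughly speaking, predictive variants enjoy regret bounds that shrink as predictions improve, which is why learning a good predictor for a distribution of games can speed up convergence on that distribution while retaining worst-case guarantees on any game.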

Published

2024-03-24

How to Cite

Sychrovský, D., Šustr, M., Davoodi, E., Bowling, M., Lanctot, M., & Schmid, M. (2024). Learning Not to Regret. Proceedings of the AAAI Conference on Artificial Intelligence, 38(14), 15202-15210. https://doi.org/10.1609/aaai.v38i14.29443

Issue

Vol. 38 No. 14 (2024)

Section

AAAI Technical Track on Machine Learning V