Dynamic Thresholding and Pruning for Regret Minimization

Authors

  • Noam Brown, Carnegie Mellon University
  • Christian Kroer, Carnegie Mellon University
  • Tuomas Sandholm, Carnegie Mellon University

DOI:

https://doi.org/10.1609/aaai.v31i1.10603

Keywords:

extensive-form game, equilibrium computation, regret minimization, convex optimization

Abstract

Regret minimization is widely used in determining strategies for imperfect-information games and in online learning. In large games, computing the regrets associated with a single iteration can be slow. For this reason, pruning — in which parts of the decision tree are not traversed in every iteration — has emerged as an essential method for speeding up iterations in large games. The ability to prune is a primary reason why the Counterfactual Regret Minimization (CFR) algorithm using regret matching has emerged as the most popular iterative algorithm for imperfect-information games, despite its relatively poor convergence bound. In this paper, we introduce dynamic thresholding, in which a threshold is set at every iteration such that any action in the decision tree with probability below the threshold is set to zero probability. This enables pruning for the first time in a wide range of algorithms. We prove that dynamic thresholding can be applied to Hedge while increasing its convergence bound by only a constant factor in terms of number of iterations. Experiments demonstrate a substantial improvement in performance for Hedge as well as the excessive gap technique.
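The abstract's core mechanism can be illustrated with a small sketch: run Hedge (probabilities proportional to the exponential of cumulative reward), then zero out any action whose probability falls below a per-iteration threshold, so that zero-probability actions can be skipped (pruned) in the tree traversal. The reward values, learning rate `eta`, and threshold schedule `c / sqrt(t)` below are illustrative assumptions, not the schedule derived in the paper.

```python
import math

def hedge_probs(cum_rewards, eta):
    """Standard Hedge: play action i with probability proportional to exp(eta * R_i)."""
    m = max(cum_rewards)                         # subtract max for numerical stability
    w = [math.exp(eta * (r - m)) for r in cum_rewards]
    z = sum(w)
    return [x / z for x in w]

def apply_threshold(probs, tau):
    """Dynamic thresholding: zero out any action below tau, then renormalize.
    Actions with zero probability need not be traversed on this iteration."""
    kept = [p if p >= tau else 0.0 for p in probs]
    z = sum(kept)
    return [p / z for p in kept]

# Toy full-feedback setting: three actions with fixed expected rewards
# (hypothetical data chosen for illustration).
rewards = [1.0, 0.8, 0.1]
eta, c = 0.5, 0.01          # c sets the illustrative threshold schedule
cum = [0.0, 0.0, 0.0]
for t in range(1, 101):
    tau = c / math.sqrt(t)  # decaying threshold; the paper derives a schedule
                            # that preserves Hedge's bound up to a constant factor
    probs = apply_threshold(hedge_probs(cum, eta), tau)
    cum = [r + x for r, x in zip(cum, rewards)]

print([round(p, 4) for p in probs])
```

After enough iterations the dominated actions fall below the threshold and receive exactly zero probability, which is what makes pruning possible; plain Hedge would assign them tiny but strictly positive probability, forcing a full traversal every iteration.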

Published

2017-02-10

How to Cite

Brown, N., Kroer, C., & Sandholm, T. (2017). Dynamic Thresholding and Pruning for Regret Minimization. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.10603

Section

AAAI Technical Track: Game Theory and Economic Paradigms