Variance Reduction in Monte Carlo Counterfactual Regret Minimization (VR-MCCFR) for Extensive Form Games Using Baselines

Authors

  • Martin Schmid DeepMind
  • Neil Burch DeepMind
  • Marc Lanctot DeepMind
  • Matej Moravcik DeepMind
  • Rudolf Kadlec Google DeepMind
  • Michael Bowling DeepMind

DOI:

https://doi.org/10.1609/aaai.v33i01.33012157

Abstract

Learning strategies for imperfect information games from samples of interaction is a challenging problem. A common method for this setting, Monte Carlo Counterfactual Regret Minimization (MCCFR), can have slow long-term convergence rates due to high variance. In this paper, we introduce a variance reduction technique (VR-MCCFR) that applies to any sampling variant of MCCFR. Using this technique, per-iteration estimated values and updates are reformulated as a function of sampled values and state-action baselines, similar to their use in policy gradient reinforcement learning. The new formulation allows estimates to be bootstrapped from other estimates within the same episode, propagating the benefits of baselines along the sampled trajectory; the estimates remain unbiased even when bootstrapping from other estimates. Finally, we show that given a perfect baseline, the variance of the value estimates can be reduced to zero. Experimental evaluation shows that VR-MCCFR brings an order of magnitude speedup, while the empirical variance decreases by three orders of magnitude. The decreased variance allows CFR+ to be used with sampling for the first time, increasing the speedup to two orders of magnitude.
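The abstract's key construction, baseline-corrected value estimates used as control variates, can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; the function names (`corrected_action_value`, `corrected_state_value`), arguments such as `policy`, `baseline`, and `sampling_prob`, and the toy numbers in the example are all hypothetical. It shows the general idea: for the sampled action the baseline is subtracted from the importance-weighted sampled value and then added back (keeping the estimate unbiased), unsampled actions fall back to their baseline, and the resulting expected state value is what gets bootstrapped up the sampled trajectory.

```python
# Minimal sketch (assumed names, not the paper's code) of a baseline-corrected
# value estimate used as a control variate, as described in the abstract.

def corrected_action_value(action, sampled_action, sampled_child_value,
                           baseline, sampling_prob):
    """Baseline-corrected estimate of the value of taking `action`.

    For the action that was actually sampled, the baseline is subtracted from
    the importance-weighted sampled child value and added back, so the estimate
    stays unbiased; for unsampled actions the baseline itself is the estimate.
    """
    b = baseline.get(action, 0.0)
    if action == sampled_action:
        return b + (sampled_child_value - b) / sampling_prob
    return b


def corrected_state_value(policy, sampled_action, sampled_child_value,
                          baseline, sampling_prob):
    """Expected state value under the current policy.

    This is the quantity passed up the sampled trajectory, so baseline
    corrections applied deeper in the tree propagate to ancestors' estimates.
    """
    return sum(
        prob * corrected_action_value(a, sampled_action, sampled_child_value,
                                       baseline, sampling_prob)
        for a, prob in policy.items()
    )


if __name__ == "__main__":
    # Toy example: two actions, "a" was sampled with probability 0.5.
    policy = {"a": 0.6, "b": 0.4}
    baseline = {"a": 1.0, "b": -0.5}   # e.g. running averages of past values
    value = corrected_state_value(policy, sampled_action="a",
                                  sampled_child_value=2.0,
                                  baseline=baseline, sampling_prob=0.5)
    print(value)  # baseline-corrected, unbiased estimate of the state value
```

If the baseline exactly equals the true action values, the correction term vanishes in expectation and the estimate has zero variance, which is the limiting case the paper proves.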

Published

2019-07-17

How to Cite

Schmid, M., Burch, N., Lanctot, M., Moravcik, M., Kadlec, R., & Bowling, M. (2019). Variance Reduction in Monte Carlo Counterfactual Regret Minimization (VR-MCCFR) for Extensive Form Games Using Baselines. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 2157-2164. https://doi.org/10.1609/aaai.v33i01.33012157

Section

AAAI Technical Track: Game Theory and Economic Paradigms