An Efficient Deep Reinforcement Learning Algorithm for Solving Imperfect Information Extensive-Form Games

Authors

  • Linjian Meng, Nanjing University
  • Zhenxing Ge, Nanjing University
  • Pinzhuo Tian, Shanghai University
  • Bo An, Nanyang Technological University
  • Yang Gao, Nanjing University

DOI:

https://doi.org/10.1609/aaai.v37i5.25722

Keywords:

GTEP: Imperfect Information

Abstract

Among the most popular methods for learning Nash equilibrium (NE) in large-scale imperfect information extensive-form games (IIEFGs) are neural variants of counterfactual regret minimization (CFR). CFR is a special case of Follow-The-Regularized-Leader (FTRL). At each iteration, neural variants of CFR update the agent's strategy via estimated counterfactual regrets, then use neural networks to approximate the new strategy, which incurs an approximation error. These approximation errors accumulate, since the counterfactual regrets at iteration t are estimated from the agent's past approximated strategies, and the accumulated error degrades performance. To address this accumulated approximation error, we propose a novel FTRL algorithm called FTRL-ORW, which does not use the agent's past strategies to choose the next-iteration strategy. More importantly, FTRL-ORW can update its strategy from trajectories sampled from the game, which makes it suitable for large-scale IIEFGs, where sampling multiple actions at each information set is prohibitively expensive. However, it remains unclear which algorithm to use to compute the next-iteration strategy for FTRL-ORW when only such sampled trajectories are revealed at iteration t. To address this problem and scale FTRL-ORW to large-scale games, we provide a model-free method called Deep FTRL-ORW, which computes the next-iteration strategy using model-free maximum-entropy deep reinforcement learning. Experimental results on two-player zero-sum IIEFGs show that Deep FTRL-ORW significantly outperforms existing model-free neural methods and OS-MCCFR.
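To illustrate the FTRL family the abstract refers to, the sketch below runs generic entropy-regularized FTRL in self-play on a small zero-sum matrix game (rock-paper-scissors): each player's next strategy is a softmax over its cumulative utilities. This is a minimal, self-contained sketch of plain FTRL, not the paper's FTRL-ORW or its deep variant; the step size `eta`, the horizon `T`, and the small perturbed initialization are illustrative assumptions.

```python
import math

# Rock-paper-scissors payoff matrix for the row player (zero-sum game).
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

def softmax(scores, eta):
    """FTRL with an entropy regularizer: strategy ∝ exp(eta * cumulative utility)."""
    m = max(scores)
    exps = [math.exp(eta * (s - m)) for s in scores]  # subtract max for stability
    z = sum(exps)
    return [e / z for e in exps]

def ftrl_selfplay(T=2000, eta=0.1):
    # Cumulative utilities for each pure action; a small perturbation
    # breaks the symmetric start (illustrative assumption).
    cum_x = [0.3, 0.0, 0.0]
    cum_y = [0.0, 0.0, 0.0]
    avg_x = [0.0, 0.0, 0.0]  # time-averaged row strategy
    for _ in range(T):
        x = softmax(cum_x, eta)
        y = softmax(cum_y, eta)
        # Expected utility of each pure action against the opponent's current mix.
        u_x = [sum(PAYOFF[a][b] * y[b] for b in range(3)) for a in range(3)]
        u_y = [sum(-PAYOFF[a][b] * x[a] for a in range(3)) for b in range(3)]
        for a in range(3):
            cum_x[a] += u_x[a]
            cum_y[a] += u_y[a]
            avg_x[a] += x[a] / T
    return avg_x

avg = ftrl_selfplay()
```

In a two-player zero-sum game the time-averaged FTRL strategies approach an equilibrium, so here `avg` ends up close to the uniform NE (1/3, 1/3, 1/3) even though the iterates themselves cycle.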

Published

2023-06-26

How to Cite

Meng, L., Ge, Z., Tian, P., An, B., & Gao, Y. (2023). An Efficient Deep Reinforcement Learning Algorithm for Solving Imperfect Information Extensive-Form Games. Proceedings of the AAAI Conference on Artificial Intelligence, 37(5), 5823-5831. https://doi.org/10.1609/aaai.v37i5.25722

Section

AAAI Technical Track on Game Theory and Economic Paradigms