Structure Learning for Approximate Solution of Many-Player Games

Authors

  • Zun Li, University of Michigan, Ann Arbor
  • Michael Wellman, University of Michigan, Ann Arbor

DOI

https://doi.org/10.1609/aaai.v34i02.5586

Abstract

Games with many players are difficult to solve or even specify without adopting structural assumptions that enable representation in compact form. Such structure is generally not given and will not hold exactly for particular games of interest. We introduce an iterative structure-learning approach to search for approximate solutions of many-player games, assuming only black-box simulation access to noisy payoff samples. Our first algorithm, K-Roles, exploits symmetry by learning a role assignment for players of the game through unsupervised learning (clustering) methods. Our second algorithm, G3L, seeks sparsity by greedy search over local interactions to learn a graphical game model. Both algorithms use supervised learning (regression) to fit payoff values to the learned structures, in compact representations that facilitate equilibrium calculation. We experimentally demonstrate the efficacy of both methods in reaching quality solutions and uncovering hidden structure, on both perfectly and approximately structured game instances.
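As an illustration of the role-learning idea sketched in the abstract (cluster players into roles, then fit payoff values to the learned structure by regression), the snippet below is a minimal, hypothetical sketch using standard scikit-learn tools. It is not the paper's K-Roles implementation; the feature construction, clustering of players, and regression targets shown here are assumptions made only for illustration, with synthetic data standing in for the black-box simulator.

```python
# Illustrative sketch only (not the paper's K-Roles implementation):
# cluster players into roles from noisy payoff samples, then fit one
# payoff regressor per role. All names and parameters are hypothetical.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_players, n_strategies, n_samples = 12, 3, 200

# Stand-in for black-box simulation access: random pure-strategy profiles
# and noisy payoffs for every player.
profiles = rng.integers(n_strategies, size=(n_samples, n_players))
payoffs = rng.normal(size=(n_samples, n_players))

# Feature vector per player: empirical mean payoff for each own strategy.
features = np.zeros((n_players, n_strategies))
for i in range(n_players):
    for s in range(n_strategies):
        mask = profiles[:, i] == s
        features[i, s] = payoffs[mask, i].mean() if mask.any() else 0.0

# Unsupervised step: assign players to roles by clustering their features.
n_roles = 2
roles = KMeans(n_clusters=n_roles, n_init=10, random_state=0).fit_predict(features)

# Supervised step: per role, regress payoff on own strategy plus the
# per-role strategy counts of the profile (a role-symmetric representation).
models = {}
for r in range(n_roles):
    X, y = [], []
    for t in range(n_samples):
        counts = [
            np.bincount(profiles[t, roles == q], minlength=n_strategies)
            for q in range(n_roles)
        ]
        for i in np.where(roles == r)[0]:
            X.append(np.concatenate([[profiles[t, i]], *counts]))
            y.append(payoffs[t, i])
    models[r] = Ridge().fit(np.array(X), np.array(y))

print("learned role assignment:", roles)
```

In this sketch, the fitted per-role regressors play the part of the compact payoff representation; the paper's G3L algorithm would instead learn a sparse interaction graph, but the same cluster-then-regress pattern conveys the overall structure-learning loop.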

Published

2020-04-03

How to Cite

Li, Z., & Wellman, M. (2020). Structure Learning for Approximate Solution of Many-Player Games. Proceedings of the AAAI Conference on Artificial Intelligence, 34(02), 2119-2127. https://doi.org/10.1609/aaai.v34i02.5586

Section

AAAI Technical Track: Game Theory and Economic Paradigms