Learning Deviation Payoffs in Simulation-Based Games

Authors

  • Samuel Sokota, Swarthmore College
  • Caleb Ho, Swarthmore College
  • Bryce Wiedenbeck, Swarthmore College

DOI:

https://doi.org/10.1609/aaai.v33i01.33012173

Abstract

We present a novel approach for identifying approximate role-symmetric Nash equilibria in large simulation-based games. Our method uses neural networks to learn a mapping from mixed-strategy profiles to deviation payoffs—the expected values of playing pure-strategy deviations from those profiles. This learning can generalize from data about a tiny fraction of a game’s outcomes, permitting tractable analysis of exponentially large normal-form games. We give a procedure for iteratively refining the learned model with new data produced by sampling in the neighborhood of each candidate Nash equilibrium. Relative to the existing state of the art, deviation payoff learning dramatically simplifies the task of computing equilibria and more effectively addresses player asymmetries. We demonstrate empirically that deviation payoff learning identifies better approximate equilibria than previous methods and can handle more difficult settings, including games with many more players, strategies, and roles.
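The core idea in the abstract—learn a mapping from mixed-strategy profiles to deviation payoffs from noisy simulation samples, then search the learned model for low-regret profiles—can be illustrated with a minimal sketch. This is not the authors' code: it uses symmetric rock–paper–scissors (where deviation payoffs are linear in the mixture, so a least-squares fit stands in for the paper's neural network) and a synthetic noisy simulator as assumed placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
# Payoff matrix for rock-paper-scissors; row i gives the payoff of pure
# strategy i against each opponent pure strategy.
A = np.array([[0., -1., 1.],
              [1., 0., -1.],
              [-1., 1., 0.]])

def simulate(mix):
    # Stand-in for a game simulator: noisy sample of the true deviation
    # payoffs A @ mix at mixed profile `mix`.
    return A @ mix + rng.normal(0.0, 0.1, size=3)

# Collect training data: random mixtures and sampled deviation payoffs.
X = rng.dirichlet(np.ones(3), size=500)
Y = np.array([simulate(x) for x in X])

# Fit a model of deviation payoffs. Here a linear least-squares fit
# suffices; the paper learns this mapping with a neural network.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def regret(mix):
    # Regret of `mix` under the learned model: best predicted deviation
    # payoff minus the predicted expected payoff of playing `mix` itself.
    dev = mix @ W
    return dev.max() - dev @ mix

# Search random candidate profiles for a low-regret (approximate Nash)
# mixture; for RPS the equilibrium is the uniform mixture.
cands = rng.dirichlet(np.ones(3), size=2000)
best = min(cands, key=regret)
print(best, regret(best))
```

The random candidate search is only a placeholder for the paper's procedure, which iteratively resamples the simulator near each candidate equilibrium to refine the learned model where accuracy matters most.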

Published

2019-07-17

How to Cite

Sokota, S., Ho, C., & Wiedenbeck, B. (2019). Learning Deviation Payoffs in Simulation-Based Games. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 2173-2180. https://doi.org/10.1609/aaai.v33i01.33012173

Section

AAAI Technical Track: Game Theory and Economic Paradigms