Memory Asymmetry Creates Heteroclinic Orbits to Nash Equilibrium in Learning in Zero-Sum Games

Authors

  • Yuma Fujimoto (SOKENDAI, The University of Tokyo, CyberAgent)
  • Kaito Ariu (CyberAgent, KTH)
  • Kenshi Abe (CyberAgent, The University of Electro-Communications)

DOI:

https://doi.org/10.1609/aaai.v38i16.29688

Keywords:

MAS: Multiagent Learning, MAS: Other Foundations of Multi Agent Systems

Abstract

Learning in games considers how multiple agents maximize their own rewards through repeated games. Memory, the ability of an agent to change its action depending on the history of actions in previous games, is often introduced into learning to explore more sophisticated strategies and to model the decision-making of real agents such as humans. However, such games with memory are hard to analyze because they exhibit complex phenomena like chaotic dynamics or divergence from Nash equilibrium. In particular, how asymmetry in memory capacities between agents affects learning in games is still unclear. In response, this study formulates a gradient ascent algorithm in games with asymmetric memory capacities. To obtain theoretical insights into learning dynamics, we first consider a simple case of zero-sum games. We observe complex behavior, where learning dynamics draw a heteroclinic connection from unstable fixed points to stable ones. Despite this complexity, we analyze learning dynamics and prove local convergence to these stable fixed points, i.e., the Nash equilibria. We identify the mechanism driving this convergence: an agent with a longer memory learns to exploit the other, which in turn endows the other's utility function with strict concavity. We further numerically observe such convergence under various initial strategies, action numbers, and memory lengths. This study reveals a novel phenomenon due to memory asymmetry, providing fundamental strides in learning in games and new insights into computing equilibria.
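As a point of reference for the gradient-ascent dynamics the abstract discusses, the sketch below runs simultaneous projected gradient ascent for two *memoryless* agents in the zero-sum game of matching pennies. This is not the paper's algorithm (which equips the agents with asymmetric memory); it is only the baseline zero-sum setting, where individual strategies are known to cycle around the Nash equilibrium while their time averages approach it. The payoff matrix, step-size schedule, and projection routine are illustrative choices, not taken from the paper.

```python
import numpy as np

# Matching pennies: payoff matrix for player X; player Y receives -A (zero-sum).
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    cssv = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > cssv)[0][-1]
    theta = cssv[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

x = np.array([0.9, 0.1])  # player X's mixed strategy
y = np.array([0.2, 0.8])  # player Y's mixed strategy
T = 5000
x_avg = np.zeros(2)
for t in range(T):
    eta = 0.5 / np.sqrt(t + 1)   # decaying step size (no-regret regime)
    gx = A @ y                   # gradient of X's expected payoff x^T A y in x
    gy = -A.T @ x                # gradient of Y's payoff (zero-sum) in y
    x = project_simplex(x + eta * gx)
    y = project_simplex(y + eta * gy)
    x_avg += x / T

print(x, x_avg)
```

Under these dynamics the iterates themselves orbit the mixed Nash equilibrium (0.5, 0.5) rather than converging to it; only the time-averaged strategy `x_avg` approaches it. The paper's contribution can be read against this baseline: with asymmetric memory, the *iterates* themselves converge to Nash equilibria along heteroclinic orbits.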

Published

2024-03-24

How to Cite

Fujimoto, Y., Ariu, K., & Abe, K. (2024). Memory Asymmetry Creates Heteroclinic Orbits to Nash Equilibrium in Learning in Zero-Sum Games. Proceedings of the AAAI Conference on Artificial Intelligence, 38(16), 17398-17406. https://doi.org/10.1609/aaai.v38i16.29688

Section

AAAI Technical Track on Multiagent Systems