Evolutionary Game Theory Squared: Evolving Agents in Endogenously Evolving Zero-Sum Games
Keywords: Multiagent Learning, Game Theory
Abstract
The predominant paradigm in evolutionary game theory, and more generally in online learning in games, is based on a clear distinction between a population of dynamic agents that interact given a fixed, static game. In this paper, we move away from the artificial divide between dynamic agents and static games, to introduce and analyze a large class of competitive settings where both the agents and the games they play evolve strategically over time. We focus on arguably the most archetypal game-theoretic setting---zero-sum games (as well as network generalizations)---and the most studied evolutionary learning dynamic---replicator, the continuous-time analogue of multiplicative weights. Populations of agents compete against each other in a zero-sum competition that itself evolves adversarially to the current population mixture. Remarkably, despite the chaotic coevolution of agents and games, we prove that the system exhibits a number of regularities. First, the system has conservation laws of an information-theoretic flavor that couple the behavior of all agents and games. Second, the system is Poincaré recurrent, with effectively all possible initializations of agents and games lying on recurrent orbits that come arbitrarily close to their initial conditions infinitely often. Third, the time-average agent behavior and utility converge to the Nash equilibrium values of the time-average game. Finally, we provide a polynomial time algorithm to efficiently predict this time-average behavior for any such coevolving network game.
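The time-average convergence property mentioned above can be illustrated in its simplest classical special case: replicator dynamics on a *static* zero-sum game (the paper's setting additionally lets the game itself evolve against the population mixture). The sketch below, a hypothetical illustration not taken from the paper, Euler-integrates replicator dynamics on Matching Pennies; the trajectories cycle around the interior Nash equilibrium, yet their time averages converge to the Nash mixture (0.5, 0.5).

```python
import numpy as np

# Matching Pennies payoff matrix for the row player (the column player gets -A).
# Illustrative static-game special case; step size and horizon are arbitrary choices.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def replicator_step(x, y, dt):
    """One forward-Euler step of continuous-time replicator dynamics."""
    ux = A @ y             # payoffs to the row player's pure strategies
    uy = -A.T @ x          # payoffs to the column player's pure strategies
    x = x + dt * x * (ux - x @ ux)
    y = y + dt * y * (uy - y @ uy)
    # Renormalize to guard against floating-point drift off the simplex.
    return x / x.sum(), y / y.sum()

dt, T = 0.001, 200_000            # total simulated time 200, i.e. many full cycles
x = np.array([0.9, 0.1])          # biased initial mixtures, away from Nash
y = np.array([0.2, 0.8])
avg_x, avg_y = np.zeros(2), np.zeros(2)
for _ in range(T):
    x, y = replicator_step(x, y, dt)
    avg_x += x
    avg_y += y
avg_x /= T
avg_y /= T
print(avg_x, avg_y)   # both time averages approach the Nash mixture (0.5, 0.5)
```

The instantaneous strategies never settle (they orbit the equilibrium, consistent with Poincaré recurrence), which is why only the *time averages* are the meaningful convergent quantities here.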
How to Cite
Skoulakis, S., Fiez, T., Sim, R., Piliouras, G., & Ratliff, L. (2021). Evolutionary Game Theory Squared: Evolving Agents in Endogenously Evolving Zero-Sum Games. Proceedings of the AAAI Conference on Artificial Intelligence, 35(13), 11343-11351. https://doi.org/10.1609/aaai.v35i13.17352
AAAI Technical Track on Multiagent Systems