TY - JOUR
AU - Elie, Romuald
AU - Pérolat, Julien
AU - Laurière, Mathieu
AU - Geist, Matthieu
AU - Pietquin, Olivier
PY - 2020/04/03
Y2 - 2022/05/25
TI - On the Convergence of Model Free Learning in Mean Field Games
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 34
IS - 05
SE - AAAI Technical Track: Multiagent Systems
DO - 10.1609/aaai.v34i05.6203
UR - https://ojs.aaai.org/index.php/AAAI/article/view/6203
SP - 7143-7150
AB - <p>Learning by experience in Multi-Agent Systems (MAS) is a difficult and exciting task, due to the lack of stationarity of the environment, whose dynamics evolve as the population learns. In order to design scalable algorithms for systems with a large population of interacting agents (<em>e.g.</em>, swarms), this paper focuses on Mean Field MAS, where the number of agents is asymptotically infinite. Recently, a very active and burgeoning field has been studying the effects of diverse reinforcement learning algorithms for agents that have no prior information on a stationary Mean Field Game (MFG) and that learn their policy through repeated experience. We adopt a high-level perspective on this problem and analyze in full generality the convergence of a fictitious iterative scheme using any single-agent learning algorithm at each step. We quantify the quality of the computed approximate Nash equilibrium in terms of the accumulated errors arising at each learning iteration step. Notably, we show for the first time the convergence of model-free learning algorithms towards non-stationary MFG equilibria, relying only on classical assumptions on the MFG dynamics. We illustrate our theoretical results with a numerical experiment in a continuous action-space environment, where the approximate best response of the iterative fictitious play scheme is computed with a deep RL algorithm.</p>
ER -