Transition-Informed Reinforcement Learning for Large-Scale Stackelberg Mean-Field Games

Authors

  • Pengdeng Li School of Computer Science and Engineering, Nanyang Technological University, Singapore
  • Runsheng Yu Hong Kong University of Science and Technology, Hong Kong, China
  • Xinrun Wang School of Computer Science and Engineering, Nanyang Technological University, Singapore
  • Bo An School of Computer Science and Engineering, Nanyang Technological University, Singapore

DOI:

https://doi.org/10.1609/aaai.v38i16.29696

Keywords:

MAS: Multiagent Learning, ML: Reinforcement Learning

Abstract

Many real-world scenarios, including fleet management and ad auctions, can be modeled as Stackelberg mean-field games (SMFGs), where a leader aims to incentivize a large number of homogeneous, self-interested followers so as to maximize her utility. Existing works focus on cases with a small number of heterogeneous followers, e.g., 5-10, and suffer from scalability issues as the number of followers increases. There are three major challenges in solving large-scale SMFGs: i) classical methods based on solving differential equations fail because they require exact dynamics parameters, ii) learning by interacting with the environment is data-inefficient, and iii) the complex interaction between the leader and the followers makes the learning performance unstable. We address these challenges through transition-informed reinforcement learning. Our main contributions are threefold: i) we first propose an RL framework, the Stackelberg mean-field update, to learn the leader's policy without prior knowledge of the environment, ii) to improve data efficiency and accelerate learning, we then propose Transition-Informed Reinforcement Learning (TIRL), which leverages the instantiated empirical Fokker-Planck equation, and iii) we develop a regularized TIRL that employs various regularizers to reduce the sensitivity of the learning performance to the initialization of the leader's policy. Extensive experiments on fleet management and food gathering demonstrate that our approach scales up to 100,000 followers and significantly outperforms existing baselines.
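To make the transition-informed idea concrete, below is a minimal, self-contained sketch of how a Fokker-Planck step can propagate the followers' population density forward in time without further environment interaction. It assumes a 1-D state space, an explicit finite-difference discretization with periodic boundaries, and a hypothetical policy-induced drift field; the function name, parameters, and discretization choices are illustrative assumptions, not the paper's actual instantiation.

```python
import numpy as np

def fokker_planck_step(mu, drift, sigma, dx, dt):
    """One explicit finite-difference step of the 1-D Fokker-Planck equation
        d(mu)/dt = -d(drift * mu)/dx + (sigma**2 / 2) * d2(mu)/dx2,
    propagating the follower population density `mu` under a given drift.
    Periodic boundaries via np.roll; hypothetical discretization choices."""
    flux = drift * mu
    # Central difference of the advection term -d(flux)/dx.
    advection = -(np.roll(flux, -1) - np.roll(flux, 1)) / (2.0 * dx)
    # Central difference of the diffusion term (sigma^2 / 2) * d2(mu)/dx2.
    diffusion = (sigma**2 / 2.0) * (np.roll(mu, -1) - 2.0 * mu + np.roll(mu, 1)) / dx**2
    mu_next = mu + dt * (advection + diffusion)
    mu_next = np.clip(mu_next, 0.0, None)   # guard against small negative values
    return mu_next / (mu_next.sum() * dx)   # renormalize to a probability density

# Usage: a Gaussian-like initial density drifting toward the origin.
x = np.linspace(-5.0, 5.0, 201)
dx = x[1] - x[0]
mu = np.exp(-x**2)
mu /= mu.sum() * dx
drift = -x                                  # hypothetical policy-induced drift
for _ in range(1000):
    mu = fokker_planck_step(mu, drift, sigma=0.5, dx=dx, dt=1e-4)
```

Replacing Monte-Carlo rollouts of many individual followers with such a density update is the kind of data-efficiency gain the abstract refers to, though the paper's empirical instantiation of the Fokker-Planck equation may differ from this sketch.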

Published

2024-03-24

How to Cite

Li, P., Yu, R., Wang, X., & An, B. (2024). Transition-Informed Reinforcement Learning for Large-Scale Stackelberg Mean-Field Games. Proceedings of the AAAI Conference on Artificial Intelligence, 38(16), 17469-17476. https://doi.org/10.1609/aaai.v38i16.29696

Issue

Vol. 38 No. 16 (2024)

Section

AAAI Technical Track on Multiagent Systems