Bi-Level Actor-Critic for Multi-Agent Coordination

Authors

  • Haifeng Zhang, University College London
  • Weizhe Chen, Shanghai Jiao Tong University
  • Zeren Huang, Shanghai Jiao Tong University
  • Minne Li, University College London
  • Yaodong Yang, Huawei R&D
  • Weinan Zhang, Shanghai Jiao Tong University
  • Jun Wang, University College London

DOI:

https://doi.org/10.1609/aaai.v34i05.6226

Abstract

Coordination is one of the essential problems in multi-agent systems. Typically, multi-agent reinforcement learning (MARL) methods treat agents equally, and the goal is to solve the Markov game to an arbitrary Nash equilibrium (NE) when multiple equilibria exist, thus lacking a solution for NE selection. In this paper, we treat agents unequally and consider the Stackelberg equilibrium as a potentially better convergence point than the Nash equilibrium in terms of Pareto superiority, especially in cooperative environments. Under Markov games, we formally define the bi-level reinforcement learning problem of finding a Stackelberg equilibrium. We propose a novel bi-level actor-critic learning method that allows agents to have different knowledge bases (and thus different levels of intelligence), while their actions can still be executed simultaneously and in a distributed manner. A convergence proof is given, and the resulting learning algorithm is tested against state-of-the-art methods. We find that the proposed bi-level actor-critic algorithm successfully converges to the Stackelberg equilibria in matrix games and finds an asymmetric solution in a highway merge environment.
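The abstract contrasts Stackelberg equilibrium selection with arbitrary Nash equilibrium selection in matrix games. The sketch below is a minimal illustration of that idea only, not the paper's bi-level actor-critic algorithm: it brute-forces a Stackelberg equilibrium in a small two-player matrix game, where the leader commits to an action anticipating the follower's best response. The payoff matrix is a hypothetical coordination game chosen purely for illustration.

```python
import numpy as np

# payoffs[i, j] = (leader reward, follower reward) when the leader plays
# action i and the follower plays action j. Hypothetical coordination game:
# both (0, 0) and (1, 1) are pure Nash equilibria, but (0, 0) is Pareto-superior.
payoffs = np.array([
    [[4, 4], [0, 0]],
    [[0, 0], [2, 2]],
])

def follower_best_response(i):
    """Follower's best reply to the leader committing to action i."""
    return int(np.argmax(payoffs[i, :, 1]))

# Stackelberg (leader-follower) selection: the leader picks the action whose
# induced best response maximizes the leader's own payoff.
leader_action = max(range(payoffs.shape[0]),
                    key=lambda i: payoffs[i, follower_best_response(i), 0])
follower_action = follower_best_response(leader_action)

print("Stackelberg outcome:", (leader_action, follower_action),
      "payoffs:", tuple(payoffs[leader_action, follower_action]))
# Prints the (0, 0) joint action with payoffs (4, 4): the leader-follower
# ordering commits both players to the Pareto-superior equilibrium, whereas
# symmetric Nash-seeking learners may settle on either equilibrium.
```

This toy computation only conveys why a Stackelberg solution can serve as an equilibrium-selection device; the paper's contribution is learning such solutions in Markov games via a bi-level actor-critic, with actions still executed simultaneously at run time.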

Published

2020-04-03

How to Cite

Zhang, H., Chen, W., Huang, Z., Li, M., Yang, Y., Zhang, W., & Wang, J. (2020). Bi-Level Actor-Critic for Multi-Agent Coordination. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 7325-7332. https://doi.org/10.1609/aaai.v34i05.6226

Section

AAAI Technical Track: Multiagent Systems