ACE: Cooperative Multi-Agent Q-learning with Bidirectional Action-Dependency

Authors

  • Chuming Li (The University of Sydney; Shanghai Artificial Intelligence Laboratory)
  • Jie Liu (Shanghai Artificial Intelligence Laboratory)
  • Yinmin Zhang (The University of Sydney; Shanghai Artificial Intelligence Laboratory)
  • Yuhong Wei (SenseTime Group LTD)
  • Yazhe Niu (Shanghai Artificial Intelligence Laboratory; SenseTime Group LTD)
  • Yaodong Yang (Institute for AI, Peking University)
  • Yu Liu (Shanghai Artificial Intelligence Laboratory; SenseTime Group LTD)
  • Wanli Ouyang (The University of Sydney; Shanghai Artificial Intelligence Laboratory)

DOI:

https://doi.org/10.1609/aaai.v37i7.26028

Keywords:

ML: Reinforcement Learning Algorithms, MAS: Coordination and Collaboration, MAS: Multiagent Learning

Abstract

Multi-agent reinforcement learning (MARL) suffers from the non-stationarity problem: the learning targets keep changing at every iteration because multiple agents update their policies simultaneously. Starting from first principles, in this paper we address the non-stationarity problem by proposing bidirectional action-dependent Q-learning (ACE). Central to ACE is a sequential decision-making process in which only one agent takes an action at a time. At the inference stage, each agent maximizes its value function given the actions taken by the preceding agents. In the learning phase, each agent minimizes a TD error that depends on how the subsequent agents react to its chosen action. With this bidirectional dependency, ACE effectively turns a multi-agent MDP into a single-agent MDP. We implement the ACE framework by identifying a suitable network representation of the action dependency, so that the sequential decision process is computed implicitly in a single forward pass. To validate ACE, we compare it with strong baselines on two MARL benchmarks. Empirical experiments demonstrate that ACE outperforms the state-of-the-art algorithms on Google Research Football and the StarCraft Multi-Agent Challenge by a large margin. In particular, on SMAC tasks, ACE achieves a 100% success rate on almost all the hard and super hard maps. We further study a range of research questions regarding ACE, including its extension, generalization, and practicability.
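For intuition, the sketch below illustrates the bidirectional action-dependency described in the abstract: sequential greedy action selection at inference, and per-agent TD targets that bootstrap from the next agent in the sequence during learning. This is a minimal, assumed formulation in plain NumPy; the function names (`sequential_greedy`, `bidirectional_td_targets`) and the stand-in Q-function are illustrative only, whereas the paper realizes the action dependency with a network computed in one forward pass.

```python
# Minimal sketch of bidirectional action-dependency (illustrative assumptions,
# not the paper's network implementation).
import numpy as np

N_AGENTS, N_ACTIONS, GAMMA = 3, 4, 0.99


def q_i(i, state, prev_actions, action):
    """Stand-in for the i-th agent's action-dependent Q-value.

    A deterministic hash of the inputs is enough to illustrate how each
    agent's value depends on the preceding agents' actions.
    """
    key = hash((i, state, tuple(prev_actions), action)) % (2**32)
    return np.random.default_rng(key).standard_normal()


def sequential_greedy(state):
    """Inference: each agent maximizes its Q given the preceding agents' actions."""
    actions = []
    for i in range(N_AGENTS):
        best = max(range(N_ACTIONS), key=lambda a: q_i(i, state, actions, a))
        actions.append(best)
    return actions


def bidirectional_td_targets(state, actions, reward, next_state):
    """Learning: agent i's target depends on how the subsequent agent reacts.

    Intermediate agents bootstrap from the next agent within the same time
    step; the last agent bootstraps from the first agent at the next state.
    """
    targets = []
    for i in range(N_AGENTS):
        if i < N_AGENTS - 1:
            target = max(
                q_i(i + 1, state, actions[: i + 1], a) for a in range(N_ACTIONS)
            )
        else:
            target = reward + GAMMA * max(
                q_i(0, next_state, [], a) for a in range(N_ACTIONS)
            )
        targets.append(target)
    return targets


if __name__ == "__main__":
    s, s_next, r = 0, 1, 1.0
    joint_action = sequential_greedy(s)
    print("joint action:", joint_action)
    print("TD targets  :", bidirectional_td_targets(s, joint_action, r, s_next))
```

Viewed this way, each agent's turn is one "step" of an ordinary single-agent MDP, which is the sense in which the bidirectional dependency removes the non-stationarity of simultaneous policy updates.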

Published

2023-06-26

How to Cite

Li, C., Liu, J., Zhang, Y., Wei, Y., Niu, Y., Yang, Y., Liu, Y., & Ouyang, W. (2023). ACE: Cooperative Multi-Agent Q-learning with Bidirectional Action-Dependency. Proceedings of the AAAI Conference on Artificial Intelligence, 37(7), 8536-8544. https://doi.org/10.1609/aaai.v37i7.26028

Section

AAAI Technical Track on Machine Learning II