A Deeper Understanding of State-Based Critics in Multi-Agent Reinforcement Learning

Authors

  • Xueguang Lyu Northeastern University, Boston, MA
  • Andrea Baisero Northeastern University, Boston, MA
  • Yuchen Xiao Northeastern University, Boston, MA
  • Christopher Amato Northeastern University, Boston, MA

DOI:

https://doi.org/10.1609/aaai.v36i9.21171

Keywords:

Multiagent Systems (MAS)

Abstract

Centralized Training for Decentralized Execution, where training is done in a centralized offline fashion, has become a popular solution paradigm in Multi-Agent Reinforcement Learning. Many such methods take the form of actor-critic with state-based critics, since centralized training allows access to the true system state, which can be useful during training despite not being available at execution time. State-based critics have become a common empirical choice, albeit one which has had limited theoretical justification or analysis. In this paper, we show that state-based critics can introduce bias in the policy gradient estimates, potentially undermining the asymptotic guarantees of the underlying algorithm. We also show that, even when state-based critics introduce no bias, they can still result in larger gradient variance, contrary to common intuition. Finally, we demonstrate the practical implications of these theoretical results by comparing different forms of centralized critics on a wide range of common benchmarks, and detail how various environmental properties relate to the effectiveness of different types of critics.
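For context, a minimal sketch of the two centralized-critic policy-gradient forms the abstract contrasts; the notation here is assumed for illustration and is not taken from the paper ($\pi_{\theta_i}$ is agent $i$'s policy conditioned on its action-observation history $h_i$, $s$ is the true state, $\boldsymbol{h}$ the joint history, and $\boldsymbol{a}$ the joint action):

$$\nabla_{\theta_i} J \;\approx\; \mathbb{E}\big[\nabla_{\theta_i} \log \pi_{\theta_i}(a_i \mid h_i)\, Q(\boldsymbol{h}, \boldsymbol{a})\big] \qquad \text{(history-based critic)}$$

$$\nabla_{\theta_i} J \;\approx\; \mathbb{E}\big[\nabla_{\theta_i} \log \pi_{\theta_i}(a_i \mid h_i)\, Q(s, \boldsymbol{a})\big] \qquad \text{(state-based critic)}$$

Under partial observability the state-based form need not match the history-based one in expectation, which is where the bias the abstract refers to can arise.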

Published

2022-06-28

How to Cite

Lyu, X., Baisero, A., Xiao, Y., & Amato, C. (2022). A Deeper Understanding of State-Based Critics in Multi-Agent Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 36(9), 9396-9404. https://doi.org/10.1609/aaai.v36i9.21171

Section

AAAI Technical Track on Multiagent Systems