MDPGT: Momentum-Based Decentralized Policy Gradient Tracking

Authors

  • Zhanhong Jiang, Johnson Controls Inc.
  • Xian Yeow Lee, Iowa State University
  • Sin Yong Tan, Iowa State University
  • Kai Liang Tan, Iowa State University
  • Aditya Balu, Iowa State University
  • Young M Lee, Johnson Controls Inc.
  • Chinmay Hegde, New York University
  • Soumik Sarkar, Iowa State University

DOI:

https://doi.org/10.1609/aaai.v36i9.21169

Keywords:

Multiagent Systems (MAS)

Abstract

We propose a novel policy gradient method for multi-agent reinforcement learning that leverages two different variance-reduction techniques and does not require large batches over iterations. Specifically, we propose momentum-based decentralized policy gradient tracking (MDPGT), where a new momentum-based variance-reduction technique is used to approximate the local policy gradient surrogate with importance sampling, and an intermediate parameter is adopted to track two consecutive policy gradient surrogates. MDPGT provably achieves the best available sample complexity of O(N⁻¹ε⁻³) for converging to an ε-stationary point of the global average of N local performance functions (possibly nonconcave). This outperforms the state-of-the-art sample complexity in decentralized model-free reinforcement learning, and when initialized with a single trajectory, the sample complexity matches that obtained by existing decentralized policy gradient methods. We further validate the theoretical claim for the Gaussian policy function. When the required error tolerance ε is small enough, MDPGT leads to a linear speedup, which has previously been established in decentralized stochastic optimization but not for reinforcement learning. Lastly, we provide empirical results on a multi-agent reinforcement learning benchmark environment to support our theoretical findings.
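The per-agent update the abstract describes (a momentum-based, importance-sampling-corrected gradient surrogate combined with gradient tracking over a mixing matrix) can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the callables sample_traj, policy_grad, and is_weight are hypothetical placeholders, and the update form follows standard momentum-based variance reduction (STORM-style) plus gradient tracking, not necessarily the paper's exact algorithm.

import numpy as np

def mdpgt_step(theta, theta_prev, u_prev, v_prev, W, eta, beta,
               sample_traj, policy_grad, is_weight, rng):
    """One synchronous MDPGT-style round for N agents with d-dimensional policies.

    theta, theta_prev : (N, d) current and previous local policy parameters.
    u_prev, v_prev    : (N, d) previous gradient surrogates and tracking variables.
    W                 : (N, N) doubly stochastic mixing matrix of the communication graph.
    eta, beta         : step size and momentum coefficient in (0, 1].
    sample_traj, policy_grad, is_weight : user-supplied callables (hypothetical names).
    """
    u_new = np.empty_like(u_prev)
    for i in range(theta.shape[0]):
        tau = sample_traj(i, theta[i], rng)             # trajectory from agent i's current policy
        g_new = policy_grad(i, theta[i], tau)           # gradient of the current policy on tau
        g_old = policy_grad(i, theta_prev[i], tau)      # gradient of the previous policy on the same tau
        w = is_weight(i, theta_prev[i], theta[i], tau)  # importance weight for reusing tau
        # Momentum-based variance reduction: blend a fresh estimate with a corrected recursive one.
        u_new[i] = beta * g_new + (1.0 - beta) * (u_prev[i] + g_new - w * g_old)
    # Gradient tracking: neighbor-average the trackers and add the surrogate increment.
    v_new = W @ v_prev + (u_new - u_prev)
    # Consensus on parameters followed by an ascent step along the tracked direction.
    theta_new = W @ theta + eta * v_new
    return theta_new, u_new, v_new

In this reading, initializing each agent's u and v from a single sampled trajectory corresponds to the single-trajectory initialization mentioned in the abstract, while the improved O(N⁻¹ε⁻³) rate presumably relies on a larger initialization batch.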

Published

2022-06-28

How to Cite

Jiang, Z., Lee, X. Y., Tan, S. Y., Tan, K. L., Balu, A., Lee, Y. M., Hegde, C., & Sarkar, S. (2022). MDPGT: Momentum-Based Decentralized Policy Gradient Tracking. Proceedings of the AAAI Conference on Artificial Intelligence, 36(9), 9377-9385. https://doi.org/10.1609/aaai.v36i9.21169

Issue

Vol. 36 No. 9 (2022)

Section

AAAI Technical Track on Multiagent Systems