Multi-Agent Reinforcement Learning with General Utilities via Decentralized Shadow Reward Actor-Critic

Authors

  • Junyu Zhang National University of Singapore
  • Amrit Singh Bedi U.S. Army Research Laboratory
  • Mengdi Wang Princeton University / DeepMind
  • Alec Koppel Supply Chain Optimization Technologies, Amazon

DOI:

https://doi.org/10.1609/aaai.v36i8.20887

Keywords:

Machine Learning (ML)

Abstract

We posit a new mechanism for cooperation in multi-agent reinforcement learning (MARL) based upon any nonlinear function of the team's long-term state-action occupancy measure, i.e., a general utility. This subsumes the cumulative return but also allows one to incorporate risk-sensitivity, exploration, and priors. We derive the Decentralized Shadow Reward Actor-Critic (DSAC), in which agents alternate between policy evaluation (critic), weighted averaging with neighbors (information mixing), and local gradient updates for their policy parameters (actor). DSAC augments the classic critic step by requiring agents to (i) estimate their local occupancy measure in order to (ii) estimate the derivative of the local utility with respect to their occupancy measure, i.e., the "shadow reward". DSAC converges to ϵ-stationarity in O(1/ϵ^2.5) or faster O(1/ϵ^2) steps with high probability, depending on the amount of communication. We further establish the non-existence of spurious stationary points for this problem, that is, DSAC finds the globally optimal policy. Experiments demonstrate the merits of goals beyond the cumulative return in cooperative MARL.
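The core idea can be illustrated with a minimal sketch (not the paper's implementation, and the entropy utility and helper names below are illustrative assumptions): for a general utility f(λ) of the state-action occupancy measure λ, the "shadow reward" is the gradient ∂f/∂λ, which then plays the role of a reward in the actor update.

```python
import numpy as np

def occupancy_measure(trajectories, n_states, n_actions, gamma=0.9):
    """Empirical discounted state-action occupancy measure.

    Each trajectory is a list of (state, action) pairs; the pair at
    step t contributes (1 - gamma) * gamma**t, averaged over trajectories.
    """
    lam = np.zeros((n_states, n_actions))
    for traj in trajectories:
        for t, (s, a) in enumerate(traj):
            lam[s, a] += (1 - gamma) * gamma**t
    return lam / len(trajectories)

def shadow_reward_entropy(lam, eps=1e-8):
    """Shadow reward for the entropy utility f(lam) = -sum lam*log(lam).

    The gradient -(log(lam) + 1) rewards rarely visited state-action
    pairs, i.e., an exploration objective beyond cumulative return.
    """
    return -(np.log(lam + eps) + 1.0)

# Toy usage: two short trajectories over 2 states and 2 actions.
trajs = [[(0, 0), (1, 1), (0, 1)], [(1, 0), (0, 0), (1, 1)]]
lam = occupancy_measure(trajs, n_states=2, n_actions=2)
r_shadow = shadow_reward_entropy(lam)  # stands in for the reward in the critic/actor steps
```

In the decentralized setting described in the abstract, each agent would form such an estimate locally and mix it with neighbors' estimates before its gradient update; this sketch only shows the single-agent shadow-reward computation.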

Published

2022-06-28

How to Cite

Zhang, J., Bedi, A. S., Wang, M., & Koppel, A. (2022). Multi-Agent Reinforcement Learning with General Utilities via Decentralized Shadow Reward Actor-Critic. Proceedings of the AAAI Conference on Artificial Intelligence, 36(8), 9031-9039. https://doi.org/10.1609/aaai.v36i8.20887

Section

AAAI Technical Track on Machine Learning III