Learning from Good Trajectories in Offline Multi-Agent Reinforcement Learning

Authors

  • Qi Tian, Zhejiang University
  • Kun Kuang, Zhejiang University
  • Furui Liu, Huawei Noah's Ark Lab
  • Baoxiang Wang, The Chinese University of Hong Kong, Shenzhen

DOI:

https://doi.org/10.1609/aaai.v37i10.26379

Keywords:

MAS: Multiagent Learning, MAS: Coordination and Collaboration

Abstract

Offline multi-agent reinforcement learning (MARL) aims to learn effective multi-agent policies from pre-collected datasets, which is an important step toward the deployment of multi-agent systems in real-world applications. In practice, however, the individual behavior policies that generate a multi-agent joint trajectory often perform at different levels; for example, one agent may follow a random policy while the other agents follow medium-quality policies. In a cooperative game with a global reward, an agent trained by existing offline MARL methods often inherits such a random policy, jeopardizing the utility of the entire team. In this paper, we investigate offline MARL with explicit consideration of the diversity of agent-wise trajectories and propose a novel framework called Shared Individual Trajectories (SIT) to address this problem. Specifically, an attention-based reward decomposition network assigns credit to each agent through a differentiable key-value memory mechanism in an offline manner. These decomposed credits are then used to reconstruct the joint offline dataset into prioritized experience replay buffers of individual trajectories, after which agents can share their good trajectories and conservatively train their policies with a graph attention network (GAT) based critic. We evaluate our method in both discrete control (i.e., StarCraft II and the multi-agent particle environment) and continuous control (i.e., Multi-Agent MuJoCo). The results show that our method achieves significantly better performance on complex, mixed offline multi-agent datasets, especially when the difference in data quality between individual trajectories is large.
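The credit-assignment step described above can be illustrated with a minimal sketch: a query derived from the global state attends over per-agent keys, and the resulting attention weights split the global reward into per-agent credits. All module names, dimensions, and the softmax-based sum-to-reward constraint below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of attention-based reward decomposition; not the SIT code.
import torch
import torch.nn as nn


class CreditAssigner(nn.Module):
    """Splits a global reward into per-agent credits via key-value attention."""

    def __init__(self, state_dim: int, agent_dim: int, embed_dim: int = 64):
        super().__init__()
        self.query = nn.Linear(state_dim, embed_dim)  # global state -> query
        self.key = nn.Linear(agent_dim, embed_dim)    # per-agent features -> keys

    def forward(self, state, agent_feats, global_reward):
        # state: (batch, state_dim); agent_feats: (batch, n_agents, agent_dim)
        q = self.query(state).unsqueeze(1)            # (batch, 1, embed_dim)
        k = self.key(agent_feats)                     # (batch, n_agents, embed_dim)
        scores = (q * k).sum(-1) / k.size(-1) ** 0.5  # scaled dot products
        weights = torch.softmax(scores, dim=-1)       # attention over agents
        # Attention weights sum to 1, so per-agent credits sum to the reward.
        return weights * global_reward.unsqueeze(-1)  # (batch, n_agents)


# Example: decompose a batch of global rewards among 3 agents.
model = CreditAssigner(state_dim=16, agent_dim=8)
credits = model(torch.randn(4, 16), torch.randn(4, 3, 8), torch.randn(4))
print(credits.shape)  # torch.Size([4, 3])
```

In this sketch, per-agent credits can then serve as priorities when rebuilding the joint dataset into individual-trajectory replay buffers, which is the role the decomposed credits play in the pipeline the abstract describes.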

Published

2023-06-26

How to Cite

Tian, Q., Kuang, K., Liu, F., & Wang, B. (2023). Learning from Good Trajectories in Offline Multi-Agent Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(10), 11672-11680. https://doi.org/10.1609/aaai.v37i10.26379

Issue

Vol. 37 No. 10 (2023)

Section

AAAI Technical Track on Multiagent Systems