Towards Efficient Collaboration via Graph Modeling in Reinforcement Learning

Authors

  • Wenzhe Fan University of Illinois, Chicago
  • Zishun Yu University of Illinois, Chicago
  • Chengdong Ma Peking University
  • Changye Li Peking University
  • Yaodong Yang Peking University
  • Xinhua Zhang University of Illinois, Chicago

DOI:

https://doi.org/10.1609/aaai.v39i16.33813

Abstract

In multi-agent reinforcement learning, a commonly considered paradigm is centralized training with decentralized execution. In this framework, however, decentralized execution restricts the development of coordinated policies because each agent acts on only its local observations. In this paper, we consider cooperation among neighboring agents during execution and formulate their interactions as a graph. Building on this formulation, we introduce a novel encoder-decoder architecture named Factor-based Multi-Agent Transformer (f-MAT) that utilizes a transformer to enable communication between neighboring agents during both training and execution. By dividing agents into overlapping groups and representing each group with a factor, f-MAT achieves efficient message passing and parallel action generation through factor-based attention layers. Empirical results on networked systems such as traffic scheduling and power control demonstrate that f-MAT outperforms strong baselines, paving the way for handling complex collaborative problems.
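The grouping-and-attention idea described in the abstract can be illustrated with a rough sketch: agents are partitioned into overlapping groups (factors), attention is computed within each factor, and each agent aggregates the messages from every factor it belongs to. The function below is an illustrative assumption for exposition only — the averaging rule, single message-passing round, and all names are hypothetical, not the paper's actual f-MAT layers.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def factor_attention(agent_emb, factors):
    """One round of factor-based message passing (illustrative sketch).

    agent_emb: (n_agents, d) array of local-observation embeddings.
    factors:   list of agent-index lists; groups may overlap.

    Each factor runs self-attention over its members' embeddings, and
    each agent then averages the messages from all factors containing it.
    """
    n, d = agent_emb.shape
    messages = np.zeros_like(agent_emb)
    counts = np.zeros(n)
    for members in factors:
        x = agent_emb[members]             # (m, d) member embeddings
        scores = x @ x.T / np.sqrt(d)      # scaled dot-product scores
        out = softmax(scores, axis=-1) @ x # per-member messages
        for i, a in enumerate(members):
            messages[a] += out[i]
            counts[a] += 1
    # Agents in several factors average their incoming messages;
    # agents in no factor keep a zero message.
    return messages / np.maximum(counts, 1)[:, None]
```

Because attention is computed only within each (typically small) factor rather than over all agents, the cost scales with factor size instead of the full agent count, which is the efficiency argument the abstract alludes to.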

Published

2025-04-11

How to Cite

Fan, W., Yu, Z., Ma, C., Li, C., Yang, Y., & Zhang, X. (2025). Towards Efficient Collaboration via Graph Modeling in Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 39(16), 16505–16513. https://doi.org/10.1609/aaai.v39i16.33813

Section

AAAI Technical Track on Machine Learning II