Transformer-Based Value Function Decomposition for Cooperative Multi-Agent Reinforcement Learning in StarCraft

Authors

  • Muhammad Junaid Khan, University of Central Florida
  • Syed Hammad Ahmed, University of Central Florida
  • Gita Sukthankar, University of Central Florida

DOI:

https://doi.org/10.1609/aiide.v18i1.21954

Keywords:

StarCraft, StarCraft Multi-agent Challenge, Multi-agent Reinforcement Learning, Transformers, Value Decomposition Functions

Abstract

The StarCraft II Multi-Agent Challenge (SMAC) was created to be a challenging benchmark problem for cooperative multi-agent reinforcement learning (MARL). SMAC focuses exclusively on the problem of StarCraft micromanagement and assumes that each unit is controlled individually by a learning agent that acts independently and only possesses local information; centralized training is assumed to occur with decentralized execution (CTDE). To perform well in SMAC, MARL algorithms must handle the dual problems of multi-agent credit assignment and joint action evaluation. This paper introduces a new architecture, TransMix, a transformer-based joint action-value mixing network that we show to be efficient and scalable compared to other state-of-the-art cooperative MARL solutions. TransMix leverages the ability of transformers to learn a richer mixing function for combining the agents' individual value functions. It achieves performance comparable to previous work on easy SMAC scenarios and outperforms other techniques on hard scenarios, as well as on scenarios corrupted with Gaussian noise to simulate fog of war.
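To illustrate the idea of a transformer-based mixing function described in the abstract, the following is a minimal, hypothetical sketch: per-agent Q-values are embedded as tokens, a single self-attention head lets agents' values attend to one another, and the result is reduced to a scalar joint action-value. This is not the authors' TransMix implementation; all dimensions, weights, and the omission of global-state conditioning (which a CTDE mixer would normally use during centralized training) are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class AttentionMixer:
    """Toy single-head self-attention mixer over per-agent Q-values.

    Hypothetical sketch of a transformer-style mixing network; a real
    CTDE mixer would also condition on the global state during training.
    """

    def __init__(self, d_model=16):
        # Project each scalar agent Q-value to a d_model-dim token.
        self.embed = rng.normal(scale=0.1, size=(1, d_model))
        self.Wq = rng.normal(scale=0.1, size=(d_model, d_model))
        self.Wk = rng.normal(scale=0.1, size=(d_model, d_model))
        self.Wv = rng.normal(scale=0.1, size=(d_model, d_model))
        self.out = rng.normal(scale=0.1, size=(d_model, 1))
        self.d = d_model

    def __call__(self, agent_qs):
        # agent_qs: (n_agents,) individual action-values from the
        # decentralized agents' local utility networks.
        tokens = agent_qs[:, None] @ self.embed          # (n_agents, d_model)
        q = tokens @ self.Wq
        k = tokens @ self.Wk
        v = tokens @ self.Wv
        attn = softmax(q @ k.T / np.sqrt(self.d))        # agents attend to each other
        mixed = attn @ v                                  # (n_agents, d_model)
        return float((mixed @ self.out).sum())           # scalar joint Q_tot

mixer = AttentionMixer()
q_tot = mixer(np.array([0.2, -0.1, 0.5, 0.3, 0.0]))     # joint value for 5 agents
```

The attention weights let the mixer learn a richer, context-dependent combination of the individual value functions than a fixed monotonic mixing network would.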

Published

2022-10-11

How to Cite

Khan, M. J., Ahmed, S. H., & Sukthankar, G. (2022). Transformer-Based Value Function Decomposition for Cooperative Multi-Agent Reinforcement Learning in StarCraft. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 18(1), 113-119. https://doi.org/10.1609/aiide.v18i1.21954