FoX: Formation-Aware Exploration in Multi-Agent Reinforcement Learning

Authors

  • Yonghyeon Jo, UNIST
  • Sunwoo Lee, UNIST
  • Junghyuk Yeom, UNIST
  • Seungyul Han, UNIST

DOI:

https://doi.org/10.1609/aaai.v38i12.29196

Keywords:

ML: Reinforcement Learning, MAS: Multiagent Learning

Abstract

Recently, deep multi-agent reinforcement learning (MARL) has gained significant popularity due to its success in various cooperative multi-agent tasks. However, exploration remains challenging in MARL because agents are only partially observable and the exploration space can grow exponentially with the number of agents. First, to address the scalability of the exploration space, we define a formation-based equivalence relation on the exploration space and aim to reduce the search space by exploring only meaningful states in different formations. We then propose a novel formation-aware exploration (FoX) framework that encourages partially observable agents to visit states in diverse formations by guiding them to be well aware of their current formation based solely on their own observations. Numerical results show that the proposed FoX framework significantly outperforms state-of-the-art MARL algorithms on Google Research Football (GRF) and sparse-reward StarCraft II Multi-Agent Challenge (SMAC) tasks.
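To make the idea of a formation-based equivalence relation concrete, here is a minimal toy sketch (not the paper's actual definition): it treats two global states as equivalent when the agents' sorted multiset of pairwise distances matches, so states that differ only by a translation of the whole team collapse into one formation class. The function name and the distance-based signature are illustrative assumptions, not taken from the paper.

```python
import itertools

def formation_signature(positions, precision=6):
    """Hypothetical formation signature: the sorted multiset of pairwise
    distances between agents. Translation-invariant, so states that differ
    only by a shift of the whole team map to the same equivalence class.
    Illustration only -- not the equivalence relation used in FoX."""
    dists = []
    for (x1, y1), (x2, y2) in itertools.combinations(positions, 2):
        dists.append(round(((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5, precision))
    return tuple(sorted(dists))

# Three toy global states for a 3-agent team:
a = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
b = [(5.0, 3.0), (6.0, 3.0), (5.0, 4.0)]  # 'a' shifted by (5, 3): same formation
c = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]  # scaled: a different formation

print(formation_signature(a) == formation_signature(b))  # True
print(formation_signature(a) == formation_signature(c))  # False
```

Under such a relation, exploration bonuses can be assigned per formation class rather than per raw joint state, shrinking the effective search space as the number of agents grows.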

Published

2024-03-24

How to Cite

Jo, Y., Lee, S., Yeom, J., & Han, S. (2024). FoX: Formation-Aware Exploration in Multi-Agent Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(12), 12985–12994. https://doi.org/10.1609/aaai.v38i12.29196

Section

AAAI Technical Track on Machine Learning III