Macro Action Selection with Deep Reinforcement Learning in StarCraft

Authors

  • Sijia Xu, Bilibili
  • Hongyu Kuang, Nanjing University
  • Zhuang Zhi, Bilibili
  • Renjie Hu, Bilibili
  • Yang Liu, Bilibili
  • Huyang Sun, Bilibili

DOI:

https://doi.org/10.1609/aiide.v15i1.5230

Abstract

StarCraft (SC) is one of the most popular and successful Real Time Strategy (RTS) games. In recent years, SC has also become widely accepted as a challenging testbed for AI research because of its enormous state space, partial observability, multi-agent collaboration, and other difficulties. With the help of the annual AIIDE and CIG competitions, a growing number of SC bots have been proposed and continuously improved. However, a large gap remains between the top-level bots and professional human players. One vital reason is that current SC bots mainly rely on predefined rules to select macro actions during their games. These rules are neither scalable nor efficient enough to cope with the enormous yet partially observed state space of the game. In this paper, we propose a deep reinforcement learning (DRL) framework to improve the selection of macro actions. Our framework combines the Ape-X DQN with a Long Short-Term Memory (LSTM) network. We use this framework to build our bot, named LastOrder. Our evaluation, based on training against all bots from the AIIDE 2017 StarCraft AI competition set, shows that LastOrder achieves an 83% win rate, outperforming 26 of the 28 entrants.
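The core idea described in the abstract — replacing hand-written rules with a learned value function over macro actions — can be illustrated with a minimal sketch. The snippet below is an assumption for illustration only, not the authors' implementation: it shows ε-greedy selection over estimated Q-values for a hypothetical set of macro actions, the per-step decision a DQN-style agent performs (in the paper's framework, those Q-values would come from an LSTM-backed network trained with Ape-X DQN).

```python
import random

# Hypothetical macro-action set; the paper's actual action space differs.
MACRO_ACTIONS = ["build_worker", "build_army", "expand", "attack", "tech_up"]

def select_macro_action(q_values, epsilon=0.05, rng=random):
    """Epsilon-greedy selection over macro-action Q-values.

    q_values: dict mapping macro-action name -> estimated Q-value
              (produced by the learned network at each decision step).
    With probability epsilon, explore a uniformly random action;
    otherwise exploit the action with the highest estimated return.
    """
    if rng.random() < epsilon:
        return rng.choice(sorted(q_values))
    return max(q_values, key=q_values.get)

# Example: with these (made-up) Q-values and no exploration,
# the greedy choice is "build_army".
q = {"build_worker": 0.2, "build_army": 0.7, "expand": 0.1,
     "attack": 0.5, "tech_up": 0.3}
print(select_macro_action(q, epsilon=0.0))  # → build_army
```

The exploration rate ε trades off trying new strategies against exploiting the current best estimate; in distributed setups like Ape-X, each actor typically runs with a different fixed ε.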

Published

2019-10-08

How to Cite

Xu, S., Kuang, H., Zhi, Z., Hu, R., Liu, Y., & Sun, H. (2019). Macro Action Selection with Deep Reinforcement Learning in StarCraft. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 15(1), 94-99. https://doi.org/10.1609/aiide.v15i1.5230