Learning Micro-Management Skills in RTS Games by Imitating Experts

Authors

  • Jay Young The University of Birmingham
  • Nick Hawes The University of Birmingham

DOI:

https://doi.org/10.1609/aiide.v10i1.12727

Keywords:

learning by observation, qualitative spatial relations, starcraft, real-time strategy, games, behaviour learning

Abstract

We investigate the problem of learning to control small groups of units in combat situations in Real Time Strategy (RTS) games. AI systems may acquire such skills by observing and learning from expert players, or from other AI systems performing those tasks. However, access to training data may be limited, and representations based on metric information -- position, velocity, orientation, etc. -- may be brittle, difficult for learning mechanisms to work with, and generalise poorly to new situations. In this work we apply qualitative spatial relations to compress such continuous, metric state-spaces into symbolic states, and show that this makes the learning problem easier and allows for more general models of behaviour. Models learnt from this representation are used to control situated agents, imitating the observed behaviour of both synthetic (pre-programmed) agents and human-controlled agents on a number of canonical micro-management tasks. We show how a Monte-Carlo method can be used to decompress qualitative data back into quantitative data for practical use in our control system. We present our work applied to the popular RTS game Starcraft.
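
To make the core idea concrete, here is a minimal illustrative sketch (not the authors' actual representation or parameters): continuous relative positions are abstracted into a qualitative (distance, direction) pair, and a Monte-Carlo rejection-sampling step recovers a concrete metric offset consistent with a given qualitative state. The distance thresholds and four-sector direction scheme are assumptions for illustration only.

```python
import math
import random

# Assumed qualitative distance bands (label, upper bound); not from the paper.
DIST_BANDS = [("near", 2.0), ("medium", 6.0), ("far", float("inf"))]

def qual_distance(dx, dy):
    """Map a metric offset to a qualitative distance label."""
    d = math.hypot(dx, dy)
    for label, upper in DIST_BANDS:
        if d <= upper:
            return label

def qual_direction(dx, dy):
    """Map a metric offset to one of four 90-degree qualitative sectors."""
    angle = math.degrees(math.atan2(dy, dx)) % 360
    sectors = ["front", "left", "back", "right"]
    return sectors[int(((angle + 45) % 360) // 90)]

def abstract_state(dx, dy):
    """Compress a continuous offset into a symbolic (distance, direction) state."""
    return (qual_distance(dx, dy), qual_direction(dx, dy))

def sample_metric(target_state, max_range=10.0, tries=10000):
    """Monte-Carlo decompression: sample random metric offsets until one
    abstracts back to the requested qualitative state."""
    for _ in range(tries):
        dx = random.uniform(-max_range, max_range)
        dy = random.uniform(-max_range, max_range)
        if abstract_state(dx, dy) == target_state:
            return dx, dy
    return None
```

A learnt policy over such symbolic states could then call `sample_metric` to turn a chosen qualitative target (e.g. `("medium", "left")`) into an executable movement command for a game agent.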

Published

2021-06-29

How to Cite

Young, J., & Hawes, N. (2021). Learning Micro-Management Skills in RTS Games by Imitating Experts. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 10(1), 195-201. https://doi.org/10.1609/aiide.v10i1.12727