Evolutionary Learning of Goal Priorities in a Real-Time Strategy Game

Authors

  • Jay Young, The University of Birmingham, United Kingdom
  • Nick Hawes, The University of Birmingham, United Kingdom

DOI:

https://doi.org/10.1609/aiide.v8i1.12503

Keywords:

motivation, learning, starcraft, goal generation

Abstract

We present a drive-based agent capable of playing the real-time strategy computer game StarCraft. Success at this task requires the ability to engage in autonomous, goal-directed behaviour, as well as techniques to manage potential goal conflicts. To address this, we show how a case-injected genetic algorithm can be used to learn goal priority profiles for use in goal management. This is achieved by learning how goals might be re-prioritised under certain operating conditions, and how priority profiles can be used to dynamically guide high-level strategies. Our dynamic system shows greatly improved results over a version equipped with static knowledge and over a version that only partially exploits the space of learned strategies. However, our work raises questions about what a system must know about its own design in order to best exploit its own competences.
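The abstract describes the approach only at a high level. As a rough illustration of the idea of evolving goal priority profiles with a case-injected genetic algorithm, the sketch below evolves a vector of per-goal priority weights and seeds the initial population with previously learned profiles (the "case injection" step). The goal names, GA parameters, and toy fitness function are hypothetical stand-ins; in the actual system, fitness would be derived from game outcomes under the given operating conditions.

```python
import random

GOALS = ["expand", "attack", "defend", "scout", "tech"]  # hypothetical goal set
POP_SIZE = 30
GENERATIONS = 40
MUTATION_RATE = 0.1

def random_profile():
    # A priority profile: one weight per goal, used to rank competing goals.
    return [random.random() for _ in GOALS]

def fitness(profile):
    # Stand-in for the real evaluation, which would play (or simulate) a
    # StarCraft game with this profile and score the outcome.
    return -sum((w - 0.5) ** 2 for w in profile)  # toy objective

def crossover(a, b):
    # Single-point crossover between two parent profiles.
    point = random.randrange(1, len(GOALS))
    return a[:point] + b[point:]

def mutate(profile):
    # Perturb each weight with small probability, clamped to [0, 1].
    return [min(1.0, max(0.0, w + random.uniform(-0.1, 0.1)))
            if random.random() < MUTATION_RATE else w
            for w in profile]

def evolve(cases=()):
    # Case injection: seed part of the population with previously learned
    # profiles (cases) so the search starts near known-good solutions.
    population = list(cases)[:POP_SIZE]
    population += [random_profile() for _ in range(POP_SIZE - len(population))]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        parents = population[:POP_SIZE // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print(dict(zip(GOALS, (round(w, 2) for w in best))))
```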

Published

2021-06-30

How to Cite

Young, J., & Hawes, N. (2021). Evolutionary Learning of Goal Priorities in a Real-Time Strategy Game. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 8(1), 87-92. https://doi.org/10.1609/aiide.v8i1.12503