Evolutionary Learning of Goal Priorities in a Real-Time Strategy Game
DOI: https://doi.org/10.1609/aiide.v8i1.12503

Keywords: motivation, learning, starcraft, goal generation

Abstract
We present a drive-based agent capable of playing the real-time strategy computer game Starcraft. Success at this task requires the ability to engage in autonomous, goal-directed behaviour, as well as techniques to manage the problem of potential goal conflicts. To address this, we show how a case-injected genetic algorithm can be used to learn goal priority profiles for use in goal management. This is achieved by learning how goals might be re-prioritised under certain operating conditions, and how priority profiles can be used to dynamically guide high-level strategies. Our dynamic system shows greatly improved results over a version equipped with static knowledge, and over a version that only partially exploits the space of learned strategies. However, our work raises questions about how much a system must know about its own design in order to best exploit its own competences.
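To make the approach concrete, the sketch below shows one way a case-injected genetic algorithm could evolve goal-priority profiles. The goal names, fitness interface, and parameters are illustrative assumptions, not details taken from the paper; the intent is only to show priority profiles as chromosomes, with previously successful profiles periodically injected into the population.

```python
import random

# Hypothetical sketch: evolving goal-priority profiles with a simple
# case-injected genetic algorithm. Goal names, fitness function and
# parameters are illustrative, not taken from the paper.

GOALS = ["expand", "build_army", "scout", "defend", "harass"]

def random_profile():
    # A priority profile maps each goal to a weight in [0, 1].
    return {g: random.random() for g in GOALS}

def crossover(a, b):
    # Uniform crossover over per-goal priorities.
    return {g: a[g] if random.random() < 0.5 else b[g] for g in GOALS}

def mutate(profile, rate=0.1):
    # Each priority is resampled with a small probability.
    return {g: (random.random() if random.random() < rate else w)
            for g, w in profile.items()}

def evolve(fitness, case_base, pop_size=20, generations=50, inject_every=5):
    """fitness(profile) -> float, e.g. a score from simulated games.
    case_base: previously successful profiles injected periodically."""
    population = [random_profile() for _ in range(pop_size)]
    for gen in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        elite = scored[: pop_size // 4]
        # Case injection: seed the population with stored solutions
        # so search is biased toward previously successful strategies.
        if case_base and gen % inject_every == 0:
            elite.extend(random.sample(case_base, min(2, len(case_base))))
        population = elite + [
            mutate(crossover(random.choice(elite), random.choice(elite)))
            for _ in range(pop_size - len(elite))
        ]
    return max(population, key=fitness)
```

Under this reading, the evolved profile would then be handed to the agent's goal manager, which uses the per-goal weights to arbitrate between conflicting goals at run time.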