Automatic Learning of Combat Models for RTS Games

Authors

  • Alberto Uriarte, Drexel University
  • Santiago Ontañón, Drexel University

DOI:

https://doi.org/10.1609/aiide.v11i1.12793

Keywords:

RTS, real-time strategy games, game-tree search, StarCraft, bot, AI, BWAPI

Abstract

Game tree search algorithms, such as Monte Carlo Tree Search (MCTS), require access to a forward model (or "simulator") of the game at hand. However, in some games such a forward model is not readily available. In this paper we address the problem of automatically learning forward models (more specifically, combat models) for two-player attrition games. We report experiments comparing several approaches to learning such combat models from replay data against models generated by hand. We use StarCraft, a Real-Time Strategy (RTS) game, as our application domain. Specifically, we use a large collection of previously gathered replays, and focus on learning a combat model for tactical combats.
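To illustrate the kind of forward model the abstract refers to, the sketch below shows a minimal attrition-combat simulator with the interface MCTS-style search would need (a state plus a step function). All unit statistics and update rules here are illustrative assumptions, not the models learned or evaluated in the paper; a learned combat model would replace the hand-made `forward` rule.

```python
import random

class CombatState:
    """Toy attrition-combat state: two armies, each a list of unit hit points.
    The stats are illustrative only, not taken from the paper."""
    def __init__(self, army_a, army_b):
        self.armies = [list(army_a), list(army_b)]

    def is_terminal(self):
        # Combat ends when either army has no surviving units.
        return not self.armies[0] or not self.armies[1]

def forward(state, damage_per_unit=5):
    """One simulation step: every unit alive at the start of the step deals
    fixed damage to a random enemy unit. This hand-made rule is a stand-in
    for a combat model, whether hand-tuned or learned from replays."""
    next_state = CombatState(state.armies[0], state.armies[1])
    for side in (0, 1):
        enemy = next_state.armies[1 - side]
        for _ in state.armies[side]:
            if not enemy:
                break
            target = random.randrange(len(enemy))
            enemy[target] -= damage_per_unit
            if enemy[target] <= 0:
                enemy.pop(target)
    return next_state

# Roll a combat forward until one army is destroyed, as a game-tree
# search algorithm would do during a playout.
state = CombatState([40, 40, 40], [40, 40])
while not state.is_terminal():
    state = forward(state)
```

Any simulator exposing this state/step interface can be plugged into an MCTS playout; the paper's contribution is learning the step function from StarCraft replay data rather than writing it by hand.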

Published

2021-06-24

How to Cite

Uriarte, A., & Ontañón, S. (2021). Automatic Learning of Combat Models for RTS Games. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 11(1), 212-218. https://doi.org/10.1609/aiide.v11i1.12793