Learning Probabilistic Behavior Models in Real-Time Strategy Games


Authors

  • Ethan Dereszynski, Oregon State University
  • Jesse Hostetler, Oregon State University
  • Alan Fern, Oregon State University
  • Tom Dietterich, Oregon State University
  • Thao-Trang Hoang, Oregon State University
  • Mark Udarbe, Oregon State University




Keywords

Behavior Modeling, Learning, Player Modeling


Abstract

We study the problem of learning probabilistic models of high-level strategic behavior in the real-time strategy (RTS) game StarCraft. The models are automatically learned from sets of game logs and aim to capture the common strategic states and decision points that arise in those games. Unlike most work on behavior/strategy learning and prediction in RTS games, our data-centric approach is not biased by or limited to any set of preconceived strategic concepts. Further, since our behavior model is based on the well-developed and generic paradigm of hidden Markov models, it supports a variety of uses for the design of AI players and human assistants. For example, the learned models can be used to make probabilistic predictions of a player's future actions based on observations, to simulate possible future trajectories of a player, or to identify uncharacteristic or novel strategies in a game database. In addition, the learned qualitative structure of the model can be analyzed by humans in order to categorize common strategic elements. We demonstrate our approach by learning models from 331 expert-level games and provide both a qualitative and quantitative assessment of the learned model's utility.
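To make the HMM-based prediction concrete, the following is a minimal sketch of the forward (filtering) recursion and a one-step predictive distribution over observations, as the abstract describes ("predictions of a player's future actions based on observations"). All parameters here are toy values for illustration, not the model learned in the paper; the mapping of states to strategic modes and observations to build actions is an assumption.

```python
import numpy as np

def forward_filter(pi, A, B, obs):
    """Forward algorithm: returns the filtered state distribution
    P(state_t | obs_1..t) after processing the observation sequence.
    pi: initial state distribution, A: state transition matrix,
    B[s, o]: probability of emitting observation o in state s."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()
    return alpha

def predict_next_obs(pi, A, B, obs):
    """One-step predictive distribution over the next observation:
    P(o_{t+1} | o_1..t) = sum_s P(s_{t+1} | o_1..t) * B[s_{t+1}, o_{t+1}]."""
    alpha = forward_filter(pi, A, B, obs)
    return (alpha @ A) @ B

# Toy 2-state, 3-symbol model (hypothetical parameters).
# States might stand for strategic modes (e.g., "expand" vs. "rush");
# symbols for observable build actions.
pi = np.array([0.6, 0.4])
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.3, 0.6]])

p_next = predict_next_obs(pi, A, B, [0, 0, 1])  # a valid distribution over the 3 symbols
```

The same filtered distribution supports the other uses mentioned above: sampling forward from `A` and `B` simulates future trajectories, and low sequence likelihood under the model flags uncharacteristic strategies.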




How to Cite

Dereszynski, E., Hostetler, J., Fern, A., Dietterich, T., Hoang, T.-T., & Udarbe, M. (2011). Learning Probabilistic Behavior Models in Real-Time Strategy Games. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 7(1), 20-25. https://doi.org/10.1609/aiide.v7i1.12433