Multimodal Goal Recognition in Open-World Digital Games

Authors

  • Wookhee Min, North Carolina State University
  • Bradford Mott, North Carolina State University
  • Jonathan Rowe, North Carolina State University
  • Robert Taylor, North Carolina State University
  • Eric Wiebe, North Carolina State University
  • Kristy Boyer, University of Florida
  • James Lester, North Carolina State University

DOI:

https://doi.org/10.1609/aiide.v13i1.12939

Keywords:

Goal Recognition, Intent Recognition, Player Modeling, Digital Game, Multimodal Interface

Abstract

Recent years have seen a growing interest in player modeling to create player-adaptive digital games. As a core player-modeling task, goal recognition aims to recognize players’ latent, high-level intentions in a non-invasive fashion to deliver goal-driven, tailored game experiences. This paper reports on an investigation of multimodal data streams that provide rich evidence about players’ goals. Two data streams, game event traces and player gaze traces, are utilized to devise goal recognition models from a corpus collected from an open-world serious game for science education. Empirical evaluations of 140 players’ trace data suggest that multimodal LSTM-based goal recognition models outperform competitive baselines, including unimodal LSTMs as well as multimodal and unimodal CRFs, with respect to predictive accuracy and early prediction. The results demonstrate that player gaze traces have the potential to significantly enhance goal recognition models’ performance.
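The abstract describes a late-fusion multimodal architecture: one LSTM per data stream (game event traces and gaze traces), with the streams' representations combined to predict the player's goal. A minimal NumPy sketch of that idea is below; all dimensions, class names, and the exact fusion scheme (concatenating each encoder's final hidden state before a softmax over candidate goals) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

class LSTMEncoder:
    """Minimal single-layer LSTM that encodes a sequence into its
    final hidden state. Weights are random (untrained), for shape
    illustration only."""
    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        # Stacked weights for the input, forget, cell, and output gates.
        self.W = rng.normal(0.0, 0.1, (4 * hidden_dim, input_dim + hidden_dim))
        self.b = np.zeros(4 * hidden_dim)
        self.hidden_dim = hidden_dim

    def encode(self, xs):
        """xs: array of shape (T, input_dim); returns final hidden state."""
        H = self.hidden_dim
        h = np.zeros(H)
        c = np.zeros(H)
        for x in xs:
            z = self.W @ np.concatenate([x, h]) + self.b
            i = sigmoid(z[:H])          # input gate
            f = sigmoid(z[H:2 * H])     # forget gate
            g = np.tanh(z[2 * H:3 * H]) # candidate cell state
            o = sigmoid(z[3 * H:])      # output gate
            c = f * c + i * g
            h = o * np.tanh(c)
        return h

class MultimodalGoalRecognizer:
    """Late-fusion sketch (hypothetical): one LSTM encoder per modality;
    the two final hidden states are concatenated and passed through a
    softmax layer over candidate goals."""
    def __init__(self, event_dim, gaze_dim, hidden_dim, n_goals, seed=0):
        rng = np.random.default_rng(seed)
        self.event_lstm = LSTMEncoder(event_dim, hidden_dim, seed)
        self.gaze_lstm = LSTMEncoder(gaze_dim, hidden_dim, seed + 1)
        self.W_out = rng.normal(0.0, 0.1, (n_goals, 2 * hidden_dim))

    def predict(self, event_seq, gaze_seq):
        """Returns a probability distribution over the candidate goals."""
        h = np.concatenate([self.event_lstm.encode(event_seq),
                            self.gaze_lstm.encode(gaze_seq)])
        return softmax(self.W_out @ h)
```

Because each modality is encoded by its own LSTM, the two streams may have different lengths and sampling rates (gaze is typically sampled far more densely than game events), which is one practical motivation for late fusion in this setting.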

Published

2021-06-25

How to Cite

Min, W., Mott, B., Rowe, J., Taylor, R., Wiebe, E., Boyer, K., & Lester, J. (2021). Multimodal Goal Recognition in Open-World Digital Games. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 13(1), 80-86. https://doi.org/10.1609/aiide.v13i1.12939