Robust Player Plan Recognition in Digital Games with Multi-Task Multi-Label Learning


  • Alex Goslen North Carolina State University
  • Dan Carpenter North Carolina State University
  • Jonathan Rowe North Carolina State University
  • Roger Azevedo University of Central Florida
  • James Lester North Carolina State University



Keywords: Plan Recognition, Multi-Task Learning, Open-World Games


Abstract

Plan recognition is a key component of player modeling. Player plan recognition focuses on modeling how and when players select goals and formulate action sequences to achieve those goals during gameplay. By occasionally asking players to describe their plans, it is possible to devise robust plan recognition models that jointly reason about player goals and action sequences in coordination with player input. In this work, we present a player plan recognition framework that leverages data from player interactions with a planning support tool embedded in CRYSTAL ISLAND, an educational game for middle school science education. Players are prompted to use the planning tool to describe their goals and planned actions in CRYSTAL ISLAND. We use this data to train data-driven player plan recognition models using multi-label multi-task learning. Specifically, we compare single-task and multi-task learning approaches for both goal prediction and action sequence prediction. Results indicate that multi-task learning yields significant benefits for action sequence prediction. Additionally, we find that incorporating automated detectors of plan completion in plan recognition models improves predictive performance on both tasks.
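To make the multi-task multi-label setup concrete, the following is a minimal illustrative sketch (not the paper's actual model or hyperparameters): a shared encoder feeds two task-specific heads, a multi-class head for goal prediction and a multi-label head for action prediction, where each planned action is an independent binary label. All sizes and names here are hypothetical.

```python
import numpy as np

# Illustrative multi-task multi-label forward pass (assumed architecture,
# not the paper's). A shared hidden layer feeds two heads:
#   - goal head: multi-class (softmax over candidate goals)
#   - action head: multi-label (independent sigmoid per candidate action)
rng = np.random.default_rng(0)

n_features, n_hidden = 16, 8
n_goals, n_actions = 5, 10          # hypothetical label-space sizes

W_shared = rng.normal(size=(n_features, n_hidden))  # learned jointly
W_goal = rng.normal(size=(n_hidden, n_goals))       # goal-task head
W_action = rng.normal(size=(n_hidden, n_actions))   # action-task head

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Shared representation, then per-task predictions."""
    h = np.tanh(x @ W_shared)
    goal_probs = softmax(h @ W_goal)      # exactly one goal per plan
    action_probs = sigmoid(h @ W_action)  # each action on/off independently
    return goal_probs, action_probs

x = rng.normal(size=(3, n_features))      # 3 hypothetical gameplay traces
goal_probs, action_probs = forward(x)
print(goal_probs.shape, action_probs.shape)  # (3, 5) (3, 10)
```

In the single-task baseline the paper compares against, each head would instead be trained on its own encoder; the multi-task variant shares the representation so that learning one task can inform the other.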




How to Cite

Goslen, A., Carpenter, D., Rowe, J., Azevedo, R., & Lester, J. (2022). Robust Player Plan Recognition in Digital Games with Multi-Task Multi-Label Learning. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 18(1), 105-112.