Optimizing Player Experience in Interactive Narrative Planning: A Modular Reinforcement Learning Approach

Authors

  • Jonathan Rowe, North Carolina State University
  • Bradford Mott, North Carolina State University
  • James Lester, North Carolina State University

DOI:

https://doi.org/10.1609/aiide.v10i1.12733

Abstract

Recent years have witnessed growing interest in data-driven approaches to interactive narrative planning and drama management. Reinforcement learning techniques show particular promise because they can automatically induce and refine models for tailoring game events by optimizing reward functions that explicitly encode the quality of players’ interactive narrative experiences. However, the inherently subjective nature of interactive narrative experience makes designing effective reward functions challenging. In this paper, we investigate the impact of alternate reward formulations in a reinforcement learning-based interactive narrative planner for the Crystal Island game environment. We formalize interactive narrative planning as a modular reinforcement learning (MRL) problem. By decomposing interactive narrative planning into multiple independent sub-problems, MRL enables efficient induction of interactive narrative policies directly from a corpus of human players’ experience data. Empirical analyses suggest that interactive narrative policies induced with MRL are likely to yield better player outcomes than heuristic or baseline policies. Furthermore, we observe that MRL-based interactive narrative planners are robust to alternate reward discount parameterizations.
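The decomposition described in the abstract — several independent sub-problems, each learned separately, with a shared arbitrator selecting game events — can be illustrated with a minimal sketch. This is a hypothetical illustration of modular Q-learning with greatest-mass arbitration, not the authors' implementation; the module names, states, actions, and reward values are invented.

```python
# Hypothetical MRL sketch: each narrative sub-problem is an independent
# module with its own Q-table and reward signal; an arbitrator picks the
# action with the largest Q-value summed across modules.
from collections import defaultdict

class Module:
    """One independent sub-problem with its own Q-function."""
    def __init__(self, alpha=0.1, gamma=0.9):
        self.q = defaultdict(float)   # (state, action) -> value
        self.alpha = alpha            # learning rate
        self.gamma = gamma            # reward discount factor

    def update(self, s, a, reward, s_next, actions):
        # Standard Q-learning backup, applied per module with its own reward.
        best_next = max(self.q[(s_next, a2)] for a2 in actions)
        self.q[(s, a)] += self.alpha * (
            reward + self.gamma * best_next - self.q[(s, a)])

def arbitrate(modules, state, actions):
    # Greatest-mass arbitration: maximize the summed Q-value over modules.
    return max(actions, key=lambda a: sum(m.q[(state, a)] for m in modules))

# Toy usage with two invented sub-problems and made-up rewards.
actions = ["reveal_clue", "delay"]
pacing, tutoring = Module(), Module()
pacing.update("s0", "reveal_clue", 1.0, "s1", actions)
tutoring.update("s0", "delay", 0.2, "s1", actions)
print(arbitrate([pacing, tutoring], "s0", actions))  # -> reveal_clue
```

Because each module's Q-table is updated only from its own reward, the sub-problems can be trained independently from logged player data, which is what makes the corpus-based induction described above tractable.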

Published

2021-06-29

How to Cite

Rowe, J., Mott, B., & Lester, J. (2021). Optimizing Player Experience in Interactive Narrative Planning: A Modular Reinforcement Learning Approach. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 10(1), 160-166. https://doi.org/10.1609/aiide.v10i1.12733