Predicting Generated Story Quality with Quantitative Measures

Authors

  • Christopher Purdy, Georgia Institute of Technology
  • Xinyu Wang, Georgia Institute of Technology
  • Larry He, Georgia Institute of Technology
  • Mark Riedl, Georgia Institute of Technology

DOI:

https://doi.org/10.1609/aiide.v14i1.13021

Keywords:

artificial intelligence, narrative intelligence, machine learning, evaluation

Abstract

The ability of digital storytelling agents to evaluate their own output is important for ensuring high-quality human-agent interactions. However, evaluating stories remains an open problem. Past evaluative techniques are either model-specific, measuring features of the model rather than the generated stories themselves, or require direct human feedback, which is resource-intensive. We introduce a number of story features that correlate with human judgments of stories and present algorithms that measure these features. We find that this approach can serve as a proxy for human-subject studies for researchers evaluating story generation systems.
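The abstract only summarizes the approach; the paper itself details the specific features and measurement algorithms. As a minimal illustrative sketch, not the authors' method, the core idea of validating a quantitative story measure against human judgments can be expressed as a correlation check. All data and names below (feature_scores, human_ratings) are hypothetical.

```python
from scipy.stats import pearsonr

# Hypothetical scores from an automatic story-quality measure and mean
# human ratings for the same five generated stories (illustrative only).
feature_scores = [0.62, 0.41, 0.88, 0.35, 0.73]  # e.g., an automatic coherence score
human_ratings = [3.8, 2.9, 4.5, 2.4, 4.1]        # e.g., mean Likert ratings per story

# A quantitative measure is a useful proxy for human-subject studies
# to the extent that it correlates with human judgments.
r, p = pearsonr(feature_scores, human_ratings)
print(f"Pearson r = {r:.3f} (p = {p:.3f})")
```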

Published

2018-09-25

How to Cite

Purdy, C., Wang, X., He, L., & Riedl, M. (2018). Predicting Generated Story Quality with Quantitative Measures. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 14(1), 95-101. https://doi.org/10.1609/aiide.v14i1.13021