Plug-and-Blend: A Framework for Plug-and-Play Controllable Story Generation with Sketches

Authors

  • Zhiyu Lin Georgia Institute of Technology
  • Mark O. Riedl Georgia Institute of Technology

DOI:

https://doi.org/10.1609/aiide.v17i1.18891

Keywords:

Procedural Content Generation, Generative Language Models, Narrative

Abstract

Large pre-trained neural language models (LMs) have very powerful text generation capabilities. However, in practice, they are hard to control for creative purposes. We describe a plug-and-play controllable language generation framework, Plug-and-Blend, that allows a human user to input multiple control codes (topics). In the context of automated story generation, this gives a human user loose or fine-grained control over the topics that appear in the generated story and the transitions between them, and even allows for overlapping, blended topics. Automated evaluations show that our framework, working with different generative LMs, steers generation toward given continuous-weighted control codes while keeping the generated sentences fluent, demonstrating strong blending capability. A human participant evaluation shows that the generated stories observably transition between two topics.
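The continuous-weighted blending the abstract describes can be illustrated with a toy sketch. Note that the vocabulary, the logit values, and the simple linear blending rule below are illustrative assumptions for exposition, not the paper's exact formulation (which blends a base LM with plug-and-play topic controllers):

```python
import math

def softmax(logits):
    """Convert a list of logits into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def blend_logits(base_logits, topic_logits, weights):
    """Shift base-LM logits toward each topic in proportion to its weight.

    This is a hypothetical simplification: each control code (topic)
    contributes its logit offset from the base distribution, scaled by a
    continuous weight, so weights like [0.8, 0.2] yield a blend that
    leans toward the first topic.
    """
    blended = list(base_logits)
    for logits, w in zip(topic_logits, weights):
        for i, l in enumerate(logits):
            blended[i] += w * (l - base_logits[i])
    return blended

# Toy vocabulary and hand-picked logits (illustrative numbers only).
vocab = ["dragon", "spaceship", "forest", "laser"]
base = [0.0, 0.0, 0.0, 0.0]           # uniform base LM
fantasy = [2.0, -1.0, 1.5, -2.0]      # logits under a "fantasy" control code
scifi = [-1.0, 2.0, -2.0, 1.5]        # logits under a "sci-fi" control code

# An 0.8/0.2 fantasy/sci-fi blend favors fantasy-flavored tokens.
probs = softmax(blend_logits(base, [fantasy, scifi], [0.8, 0.2]))
top = vocab[max(range(len(vocab)), key=probs.__getitem__)]
```

Flipping the weights to [0.2, 0.8] flips the preference toward the sci-fi tokens, which is the sense in which a user can continuously steer topic transitions across a generated story.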

Published

2021-10-04

How to Cite

Lin, Z., & Riedl, M. O. (2021). Plug-and-Blend: A Framework for Plug-and-Play Controllable Story Generation with Sketches. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 17(1), 58-65. https://doi.org/10.1609/aiide.v17i1.18891