Knowledge-Enriched Visual Storytelling


  • Chao-Chun Hsu University of Colorado Boulder
  • Zi-Yuan Chen Academia Sinica
  • Chi-Yang Hsu Pennsylvania State University
  • Chih-Chia Li National Chiao Tung University
  • Tzu-Yuan Lin Academia Sinica
  • Ting-Hao Huang Pennsylvania State University
  • Lun-Wei Ku Academia Sinica

Stories are diverse and highly personalized, resulting in a large possible output space for story generation. Existing end-to-end approaches produce monotonous stories because they are limited to the vocabulary and knowledge of a single training dataset. This paper introduces KG-Story, a three-stage framework that allows the story generation model to take advantage of external knowledge graphs to produce interesting stories. KG-Story distills a set of representative words from the input prompts, enriches the word set using external knowledge graphs, and finally generates stories based on the enriched word set. This distill-enrich-generate framework allows the use of external resources not only in the enrichment phase, but also in the distillation and generation phases. In this paper, we show the superiority of KG-Story for visual storytelling, where the input prompt is a sequence of five photos and the output is a short story. In human ranking evaluations, stories generated by KG-Story are on average ranked higher than those of state-of-the-art systems. Our code and output stories are available at
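To make the three stages concrete, here is a minimal, purely illustrative sketch of the distill-enrich-generate flow described in the abstract. All function names, the toy knowledge graph, and the tag-based distillation heuristic below are assumptions for illustration, not the authors' actual implementation (which uses neural models and full knowledge graphs such as Visual Genome or OpenIE).

```python
# Hypothetical sketch of the KG-Story distill-enrich-generate pipeline.
# The toy knowledge graph and all helpers here are illustrative only.

# Assumed toy knowledge graph: term -> related terms.
TOY_KG = {
    "beach": ["sandcastle"],
    "dog": ["fetch"],
}

def distill(photo_tags):
    # Stage 1: pick one representative word per photo
    # (here: simply the first tag; the paper uses a learned model).
    return [tags[0] for tags in photo_tags]

def enrich(words, kg):
    # Stage 2: expand the word set with knowledge-graph neighbors,
    # adding at most one related term after each distilled word.
    enriched = []
    for w in words:
        enriched.append(w)
        enriched.extend(kg.get(w, [])[:1])
    return enriched

def generate(words):
    # Stage 3: surface realization; a real system would condition a
    # sequence decoder on the enriched word set to produce sentences.
    return " ".join(f"[{w}]" for w in words)

story = generate(enrich(distill([["dog", "park"], ["beach", "sunset"]]), TOY_KG))
print(story)  # each bracketed term stands in for a generated sentence
```

The point of the sketch is the data flow: enrichment introduces words ("fetch", "sandcastle") that never appeared in the input tags, which is what lets the final story escape the vocabulary of a single training dataset.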

How to Cite

Hsu, C.-C., Chen, Z.-Y., Hsu, C.-Y., Li, C.-C., Lin, T.-Y., Huang, T.-H., & Ku, L.-W. (2020). Knowledge-Enriched Visual Storytelling. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 7952-7960.

AAAI Technical Track: Natural Language Processing