Enhanced Story Comprehension for Large Language Models through Dynamic Document-Based Knowledge Graphs
Keywords: Speech & Natural Language Processing (SNLP), Machine Learning (ML)
Abstract
Large transformer-based language models have achieved incredible success at various tasks which require narrative comprehension, including story completion, answering questions about stories, and generating stories ex nihilo. However, due to the limitations of finite context windows, these language models struggle to produce or understand stories longer than several thousand tokens. In order to mitigate the document length limitations that come with finite context windows, we introduce a novel architecture that augments story processing with an external dynamic knowledge graph. In contrast to static commonsense knowledge graphs, which hold information about the real world, these dynamic knowledge graphs reflect facts extracted from the story being processed. Our architecture uses these knowledge graphs to create information-rich prompts which better facilitate story comprehension than prompts composed only of story text. We apply our architecture to the tasks of question answering and story completion. To complement this line of research, we introduce two long-form question answering tasks, LF-SQuAD and LF-QUOREF, in which the document length exceeds the size of the language model's context window, and introduce a story completion evaluation method that bypasses the stochastic nature of language model generation. We demonstrate broad improvement over typical prompt formulation methods for both question answering and story completion using GPT-2, GPT-3, and XLNet.
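The core idea from the abstract can be illustrated with a minimal sketch: as a story is read, facts are stored as (subject, relation, object) triples in a dynamic knowledge graph, and relevant facts are then linearized into the prompt alongside the question. This is not the authors' implementation; the class name, the manually supplied triples, and the prompt template are all hypothetical stand-ins (a real system would use an automatic triple extractor).

```python
# Illustrative sketch only: a dynamic document-based knowledge graph
# populated with facts extracted from the story, then used to build an
# information-rich QA prompt. All names and triples here are invented
# for demonstration; they do not come from the paper.

from collections import defaultdict

class DynamicStoryKG:
    def __init__(self):
        # subject -> ordered list of (relation, object) facts seen so far
        self.facts = defaultdict(list)

    def add_triple(self, subj, rel, obj):
        """Update the graph as new story text is processed."""
        if (rel, obj) not in self.facts[subj]:
            self.facts[subj].append((rel, obj))

    def facts_about(self, entities):
        """Linearize every stored fact that mentions a query entity."""
        return [
            f"{subj} {rel} {obj}."
            for subj in entities
            for rel, obj in self.facts.get(subj, [])
        ]

    def build_prompt(self, question, entities, max_facts=5):
        """Compose a prompt from relevant story facts plus the question."""
        context = " ".join(self.facts_about(entities)[:max_facts])
        return f"{context}\nQuestion: {question}\nAnswer:"

# Example usage: facts accumulate as the story unfolds, so the prompt
# can surface information that fell outside the model's context window.
kg = DynamicStoryKG()
kg.add_triple("Alice", "lives in", "the lighthouse")
kg.add_triple("Alice", "owns", "a dog")
prompt = kg.build_prompt("Where does Alice live?", ["Alice"])
```

The design choice worth noting is that the graph is rebuilt per document, in contrast to a static commonsense graph: it holds only story-specific facts, so the prompt stays compact even for very long narratives.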
How to Cite
Andrus, B. R., Nasiri, Y., Cui, S., Cullen, B., & Fulda, N. (2022). Enhanced Story Comprehension for Large Language Models through Dynamic Document-Based Knowledge Graphs. Proceedings of the AAAI Conference on Artificial Intelligence, 36(10), 10436-10444. https://doi.org/10.1609/aaai.v36i10.21286
AAAI Technical Track on Speech and Natural Language Processing