TY - JOUR
AU - Andrus, Berkeley R
AU - Nasiri, Yeganeh
AU - Cui, Shilong
AU - Cullen, Benjamin
AU - Fulda, Nancy
PY - 2022/06/28
Y2 - 2024/03/29
TI - Enhanced Story Comprehension for Large Language Models through Dynamic Document-Based Knowledge Graphs
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 36
IS - 10
SE - AAAI Technical Track on Speech and Natural Language Processing
DO - 10.1609/aaai.v36i10.21286
UR - https://ojs.aaai.org/index.php/AAAI/article/view/21286
SP - 10436
EP - 10444
AB - Large transformer-based language models have achieved incredible success at various tasks which require narrative comprehension, including story completion, answering questions about stories, and generating stories ex nihilo. However, due to the limitations of finite context windows, these language models struggle to produce or understand stories longer than several thousand tokens. In order to mitigate the document length limitations that come with finite context windows, we introduce a novel architecture that augments story processing with an external dynamic knowledge graph. In contrast to static commonsense knowledge graphs which hold information about the real world, these dynamic knowledge graphs reflect facts extracted from the story being processed. Our architecture uses these knowledge graphs to create information-rich prompts which better facilitate story comprehension than prompts composed only of story text. We apply our architecture to the tasks of question answering and story completion. To complement this line of research, we introduce two long-form question answering tasks, LF-SQuAD and LF-QUOREF, in which the document length exceeds the size of the language model's context window, and introduce a story completion evaluation method that bypasses the stochastic nature of language model generation. We demonstrate broad improvement over typical prompt formulation methods for both question answering and story completion using GPT-2, GPT-3 and XLNet.
ER -