Inferential Knowledge-Enhanced Integrated Reasoning for Video Question Answering
DOI:
https://doi.org/10.1609/aaai.v37i11.26570
Keywords:
SNLP: Question Answering
Abstract
Recently, video question answering has attracted growing attention. It requires answering a question based on a fine-grained understanding of a video's multi-modal content. Most existing methods have successfully explored deep understanding of the visual modality. We argue that a deep understanding of the linguistic modality is also essential for answer reasoning, especially for videos that contain character dialogues. To this end, we propose an Inferential Knowledge-Enhanced Integrated Reasoning method. Our method consists of two main components: 1) an Inferential Knowledge Reasoner that generates inferential knowledge for linguistic-modality inputs, revealing deeper semantics such as implicit causes, effects, and mental states; 2) an Integrated Reasoning Mechanism that enhances video content understanding and answer reasoning by leveraging the generated inferential knowledge. Experimental results show that our method achieves significant improvements on two mainstream datasets. The ablation study further demonstrates the effectiveness of each component of our approach.
Published
2023-06-26
How to Cite
Mao, J., Jiang, W., Liu, H., Wang, X., & Lyu, Y. (2023). Inferential Knowledge-Enhanced Integrated Reasoning for Video Question Answering. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 13380-13388. https://doi.org/10.1609/aaai.v37i11.26570
Section
AAAI Technical Track on Speech & Natural Language Processing