Graph Reasoning Transformers for Knowledge-Aware Question Answering
DOI:
https://doi.org/10.1609/aaai.v38i17.29938
Keywords:
NLP: Question Answering, NLP: Applications
Abstract
Augmenting Language Models (LMs) with structured knowledge graphs (KGs) aims to leverage structured world knowledge to enhance the capability of LMs to complete knowledge-intensive tasks. However, existing methods are unable to effectively utilize the structured knowledge in a KG because they fail to capture the rich relational semantics of knowledge triplets. Moreover, the modality gap between natural language text and KGs has become a challenging obstacle when aligning and fusing cross-modal information. To address these challenges, we propose a novel knowledge-augmented question answering (QA) model, namely, Graph Reasoning Transformers (GRT). Unlike conventional node-level methods, the GRT treats knowledge triplets as atomic knowledge units and utilizes a triplet-level graph encoder to capture triplet-level graph features. Furthermore, to alleviate the negative effect of the modality gap on joint reasoning, we propose a representation-alignment pretraining strategy to align the cross-modal representations and introduce a cross-modal information fusion module with attention bias to enable fine-grained information fusion. Extensive experiments on three knowledge-intensive QA benchmarks show that the GRT outperforms state-of-the-art KG-augmented QA systems, demonstrating the effectiveness and adaptability of our proposed model.
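To make the attention-bias idea in the abstract concrete, the following is a minimal sketch, not the authors' implementation: a single attention layer over concatenated text-token and triplet embeddings, where a learned per-head bias is added to the attention logits of cross-modal (text-to-triplet and triplet-to-text) pairs. All names here (CrossModalFusion, cross_bias, the input shapes) are hypothetical illustrations, and the paper should be consulted for the actual module design.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Hypothetical sketch of attention-biased cross-modal fusion.

    Attends over the concatenation of text-token and triplet embeddings,
    adding a learned scalar bias (one per head) to the attention logits
    of cross-modal pairs so the model can modulate how strongly the two
    modalities mix.
    """

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # One learned additive bias per attention head for cross-modal logits.
        self.cross_bias = nn.Parameter(torch.zeros(num_heads))
        self.num_heads = num_heads

    def forward(self, text: torch.Tensor, triplets: torch.Tensor):
        # text: (B, T, D) token embeddings; triplets: (B, K, D) triplet embeddings.
        x = torch.cat([text, triplets], dim=1)  # (B, T+K, D)
        B, T, K = text.size(0), text.size(1), triplets.size(1)

        # Mark which positions are text; XOR flags cross-modal pairs.
        is_text = torch.zeros(T + K, dtype=torch.bool, device=text.device)
        is_text[:T] = True
        cross = is_text.unsqueeze(0) ^ is_text.unsqueeze(1)  # (T+K, T+K)

        # Additive attention bias, nonzero only on cross-modal pairs,
        # shaped (B * num_heads, T+K, T+K) as expected by attn_mask.
        bias = self.cross_bias.view(-1, 1, 1) * cross.float()  # (H, T+K, T+K)
        bias = bias.unsqueeze(0).expand(B, -1, -1, -1).reshape(
            B * self.num_heads, T + K, T + K)

        fused, _ = self.attn(x, x, x, attn_mask=bias)
        return fused[:, :T], fused[:, T:]  # updated text / triplet states

# Example: fuse 16 question-token embeddings with 10 retrieved triplets (batch of 2).
# fusion = CrossModalFusion(dim=128)
# text_out, trip_out = fusion(torch.randn(2, 16, 128), torch.randn(2, 10, 128))
```

Here each triplet is represented by a single embedding, mirroring the abstract's treatment of triplets as atomic knowledge units rather than separate head, relation, and tail nodes.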
Published
2024-03-24
How to Cite
Zhao, R., Zhao, F., Hu, L., & Xu, G. (2024). Graph Reasoning Transformers for Knowledge-Aware Question Answering. Proceedings of the AAAI Conference on Artificial Intelligence, 38(17), 19652-19660. https://doi.org/10.1609/aaai.v38i17.29938
Issue
Vol. 38 No. 17
Section
AAAI Technical Track on Natural Language Processing II