FiTs: Fine-Grained Two-Stage Training for Knowledge-Aware Question Answering


  • Qichen Ye, Peking University
  • Bowen Cao, Peking University
  • Nuo Chen, Hong Kong University of Science and Technology (Guangzhou)
  • Weiyuan Xu, Peking University
  • Yuexian Zou, Peking University; Peng Cheng Laboratory





Knowledge-aware question answering (KAQA) requires a model to answer questions over a knowledge base, which is essential for both open-domain QA and domain-specific QA, especially when language models alone cannot provide all the knowledge needed. Despite the promising results of recent KAQA systems, which tend to integrate linguistic knowledge from pre-trained language models (PLMs) and factual knowledge from knowledge graphs (KGs) to answer complex questions, a bottleneck remains in effectively fusing the representations from PLMs and KGs because of (i) the semantic and distributional gaps between them, and (ii) the difficulty of joint reasoning over the knowledge provided by both modalities. To address these two problems, we propose a Fine-grained Two-stage training framework (FiTs) to boost KAQA system performance. The first stage, named knowledge-adaptive post-training, aligns the representations from the PLM and the KG, thus bridging the modality gap between them. The second stage, called knowledge-aware fine-tuning, improves the model's joint reasoning ability based on the aligned representations: we fine-tune the post-trained model via two auxiliary self-supervised tasks in addition to the QA supervision. Extensive experiments demonstrate that our approach achieves state-of-the-art performance on three benchmarks in the commonsense reasoning (i.e., CommonsenseQA, OpenBookQA) and medical question answering (i.e., MedQA-USMLE) domains.
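The two-stage recipe described above can be illustrated with a minimal sketch. This is not the paper's actual objective: the abstract does not specify the alignment loss or the auxiliary tasks, so the code below assumes a contrastive (InfoNCE-style) loss for stage-1 representation alignment, and models stage-2 simply as QA supervision plus weighted auxiliary losses. All function names and weights are hypothetical.

```python
import math

def _dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def _unit(v):
    n = math.sqrt(_dot(v, v))
    return [x / n for x in v]

def alignment_loss(plm_vecs, kg_vecs, temperature=0.1):
    """Hypothetical stage-1 objective: contrastive alignment of PLM and KG
    representations. Each PLM vector's positive is the KG vector at the same
    index; the other KG vectors in the batch serve as negatives."""
    p = [_unit(v) for v in plm_vecs]
    k = [_unit(v) for v in kg_vecs]
    total = 0.0
    for i in range(len(p)):
        logits = [_dot(p[i], k[j]) / temperature for j in range(len(k))]
        m = max(logits)  # subtract max for numerical stability
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        total += -(logits[i] - log_z)  # negative log-likelihood of positive
    return total / len(p)

def stage2_loss(qa_loss, aux_losses, weights):
    """Hypothetical stage-2 objective: QA supervision plus a weighted sum of
    the two auxiliary self-supervised losses."""
    return qa_loss + sum(w * l for w, l in zip(weights, aux_losses))
```

For example, a batch where PLM and KG vectors already match index-for-index yields a lower alignment loss than a batch whose KG vectors are shuffled, which is the signal the alignment stage would exploit.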




How to Cite

Ye, Q., Cao, B., Chen, N., Xu, W., & Zou, Y. (2023). FiTs: Fine-Grained Two-Stage Training for Knowledge-Aware Question Answering. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 13914-13922.



AAAI Technical Track on Speech & Natural Language Processing