Relation-Aware Language-Graph Transformer for Question Answering

Authors

  • Jinyoung Park, Korea University
  • Hyeong Kyu Choi, Korea University
  • Juyeon Ko, Korea University
  • Hyeonjin Park, NAVER
  • Ji-Hoon Kim, NAVER, NAVER Cloud, NAVER AI Lab
  • Jisu Jeong, NAVER, NAVER Cloud, NAVER AI Lab
  • Kyungmin Kim, NAVER, NAVER Cloud, NAVER AI Lab
  • Hyunwoo Kim, Korea University

DOI:

https://doi.org/10.1609/aaai.v37i11.26578

Keywords:

SNLP: Question Answering, KRR: Common-Sense Reasoning, ML: Graph-based Machine Learning

Abstract

Question Answering (QA) is a task that entails reasoning over natural language contexts, and many relevant works augment language models (LMs) with graph neural networks (GNNs) to encode Knowledge Graph (KG) information. However, most existing GNN-based modules for QA do not take advantage of the rich relational information of KGs and depend on limited information interaction between the LM and the KG. To address these issues, we propose Question Answering Transformer (QAT), which is designed to jointly reason over language and graphs with respect to entity relations in a unified manner. Specifically, QAT constructs Meta-Path tokens, which learn relation-centric embeddings based on diverse structural and semantic relations. Then, our Relation-Aware Self-Attention module comprehensively integrates different modalities via the Cross-Modal Relative Position Bias, which guides information exchange between relevant entities of different modalities. We validate the effectiveness of QAT on commonsense question answering datasets, CommonsenseQA and OpenBookQA, and on a medical question answering dataset, MedQA-USMLE. On all the datasets, our method achieves state-of-the-art performance. Our code is available at http://github.com/mlvlab/QAT.
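To make the mechanism described in the abstract more concrete, below is a minimal PyTorch sketch of relation-aware self-attention over concatenated language tokens and Meta-Path (KG relation) tokens, where a learned cross-modal bias is added to the attention logits depending on the modality pair of each query-key position. The class name, tensor shapes, and the simple per-modality-pair bias are illustrative assumptions, not the authors' released implementation; see the repository linked above for the actual code.

```python
import torch
import torch.nn as nn

class RelationAwareSelfAttention(nn.Module):
    """Hypothetical sketch: joint self-attention over language and
    meta-path tokens with a learned cross-modal relative position bias."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        # One learned bias per head for each modality pair:
        # (LM->LM, LM->KG, KG->LM, KG->KG).
        self.cross_modal_bias = nn.Parameter(torch.zeros(num_heads, 2, 2))

    def forward(self, lm_tokens: torch.Tensor, mp_tokens: torch.Tensor) -> torch.Tensor:
        # lm_tokens: (B, N_l, dim) language-model token embeddings
        # mp_tokens: (B, N_m, dim) meta-path (relation) token embeddings
        x = torch.cat([lm_tokens, mp_tokens], dim=1)              # (B, N, dim)
        B, N, _ = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)                      # each (B, H, N, d)

        attn = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5   # (B, H, N, N)

        # Modality index per token: 0 = language, 1 = meta-path/KG.
        modality = torch.cat([
            torch.zeros(lm_tokens.size(1), dtype=torch.long),
            torch.ones(mp_tokens.size(1), dtype=torch.long),
        ]).to(x.device)
        # Look up the bias for every (query modality, key modality) pair.
        bias = self.cross_modal_bias[:, modality[:, None], modality[None, :]]  # (H, N, N)
        attn = attn + bias.unsqueeze(0)

        out = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(B, N, -1)
        return self.proj(out)
```

The key design point this sketch illustrates is that language and graph tokens attend to each other within a single attention operation, with the cross-modal bias steering how strongly information flows between the two modalities rather than relying on a separate fusion step.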

Published

2023-06-26

How to Cite

Park, J., Choi, H. K., Ko, J., Park, H., Kim, J.-H., Jeong, J., Kim, K., & Kim, H. (2023). Relation-Aware Language-Graph Transformer for Question Answering. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 13457-13464. https://doi.org/10.1609/aaai.v37i11.26578

Issue

Vol. 37 No. 11 (2023)

Section

AAAI Technical Track on Speech & Natural Language Processing