Benchmarking Knowledge-Enhanced Commonsense Question Answering via Knowledge-to-Text Transformation

Authors

  • Ning Bian, Chinese Information Processing Laboratory, Institute of Software, Chinese Academy of Sciences; University of Chinese Academy of Sciences
  • Xianpei Han, Chinese Information Processing Laboratory, Institute of Software, Chinese Academy of Sciences; State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences
  • Bo Chen, Chinese Information Processing Laboratory, Institute of Software, Chinese Academy of Sciences; State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences
  • Le Sun, Chinese Information Processing Laboratory, Institute of Software, Chinese Academy of Sciences; State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences

DOI:

https://doi.org/10.1609/aaai.v35i14.17490

Keywords:

Question Answering

Abstract

A fundamental ability of humans is to utilize commonsense knowledge in language understanding and question answering. In recent years, many knowledge-enhanced Commonsense Question Answering (CQA) approaches have been proposed. However, it remains unclear: (1) How far can we get by exploiting external knowledge for CQA? (2) How much of that knowledge's potential do current CQA models actually exploit? (3) What are the most promising directions for future CQA? To answer these questions, we benchmark knowledge-enhanced CQA by conducting extensive experiments on multiple standard CQA datasets using a simple and effective knowledge-to-text transformation framework. Experiments show that: (1) Our knowledge-to-text framework is effective and achieves state-of-the-art performance on the CommonsenseQA dataset, providing a simple and strong knowledge-enhanced baseline for CQA; (2) The potential of knowledge is still far from fully exploited in CQA: there is a significant performance gap between current models and our models with golden knowledge; and (3) Context-sensitive knowledge selection, heterogeneous knowledge exploitation, and commonsense-rich language models are promising directions for future CQA.
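
To make the knowledge-to-text idea concrete, the sketch below shows one minimal way such a transformation could work: knowledge triples (e.g., from ConceptNet) are verbalized with hand-written relation templates and concatenated with the question and each answer choice, so an off-the-shelf pretrained QA model can consume the knowledge as plain text. This is an illustrative sketch only, not the authors' implementation; the relation templates, example triples, and function names are assumptions.

```python
# Minimal sketch of a knowledge-to-text transformation step (illustrative only).
# Knowledge triples are verbalized with relation templates and prepended to the
# question so a standard pretrained QA model can read them as ordinary text.
# The templates and example triples below are assumptions, not the paper's exact ones.

RELATION_TEMPLATES = {
    "AtLocation": "{head} can be found at {tail}.",
    "UsedFor": "{head} is used for {tail}.",
    "CapableOf": "{head} can {tail}.",
    "Causes": "{head} causes {tail}.",
}

def triple_to_text(head: str, relation: str, tail: str) -> str:
    """Verbalize a single (head, relation, tail) triple with a template."""
    template = RELATION_TEMPLATES.get(relation, "{head} is related to {tail}.")
    return template.format(head=head, tail=tail)

def build_input(question: str, choice: str, triples: list[tuple[str, str, str]]) -> str:
    """Concatenate verbalized knowledge, the question, and one answer choice
    into a single text sequence that a pretrained language model could score."""
    knowledge_text = " ".join(triple_to_text(h, r, t) for h, r, t in triples)
    return f"{knowledge_text} Question: {question} Answer: {choice}"

if __name__ == "__main__":
    triples = [
        ("revolving door", "AtLocation", "bank"),
        ("revolving door", "UsedFor", "entering a building"),
    ]
    question = "Where would you find a revolving door that provides security?"
    for choice in ["bank", "library", "mall"]:
        print(build_input(question, choice, triples))
        # Each string would then be scored by a pretrained QA model;
        # the highest-scoring choice is returned as the answer.
```

In this setup, the knowledge-selection step (which triples to verbalize for a given question) and the downstream scoring model are deliberately left abstract; the point is only that transforming structured knowledge into text lets any pretrained language model serve as the CQA backbone without architectural changes.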

Published

2021-05-18

How to Cite

Bian, N., Han, X., Chen, B., & Sun, L. (2021). Benchmarking Knowledge-Enhanced Commonsense Question Answering via Knowledge-to-Text Transformation. Proceedings of the AAAI Conference on Artificial Intelligence, 35(14), 12574-12582. https://doi.org/10.1609/aaai.v35i14.17490

Issue

Vol. 35 No. 14 (2021)

Section

AAAI Technical Track on Speech and Natural Language Processing I