Evaluating Commonsense in Pre-Trained Language Models

Authors

  • Xuhui Zhou University of Washington
  • Yue Zhang Westlake University
  • Leyang Cui Westlake University
  • Dandan Huang Westlake University

DOI:

https://doi.org/10.1609/aaai.v34i05.6523

Abstract

Contextualized representations trained over large-scale raw text data have yielded remarkable improvements on NLP tasks including question answering and reading comprehension. Prior work has shown that such representations capture syntactic, semantic, and word sense knowledge, which helps explain why they benefit these tasks. However, relatively little work has investigated the commonsense knowledge contained in contextualized representations, which is crucial for human question answering and reading comprehension. We study the commonsense ability of GPT, BERT, XLNet, and RoBERTa by testing them on seven challenging benchmarks, finding that language modeling and its variants are effective objectives for promoting models' commonsense ability, while bi-directional context and a larger training set are bonuses. We additionally find that current models do poorly on tasks that require more inference steps. Finally, we test the robustness of the models with dual test cases, which are correlated such that a correct prediction on one sample should lead to a correct prediction on the other. Interestingly, the models show confusion on these test cases, suggesting that they learn commonsense at a surface level rather than a deep level. We publicly release a test set, named CATs, for future research.
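The abstract does not spell out the evaluation procedure, but a minimal sketch of the kind of scoring it refers to — comparing a causal language model's plausibility scores on the two sentences of a dual test case — might look like the following. The choice of GPT-2 via the HuggingFace `transformers` library, the example sentence pair, and the `sentence_score` helper are illustrative assumptions, not the authors' code or the released CATs data.

```python
# Minimal sketch (not the authors' implementation): score two candidate
# sentences from a "dual" test case with a causal LM and pick the one the
# model assigns higher average log-likelihood.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_score(sentence: str) -> float:
    """Average log-likelihood per token under the LM (higher = more plausible)."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels == input_ids, the model returns mean token cross-entropy.
        loss = model(ids, labels=ids).loss
    return -loss.item()

# Hypothetical dual pair: a correct prediction on one should imply the other.
plausible = "He put the turkey in the oven."
implausible = "He put the oven in the turkey."
print(sentence_score(plausible) > sentence_score(implausible))  # expected: True
```

If the model is only picking up surface cues, swapping a few content words (as in the dual pair above) can flip its preference, which is the kind of inconsistency the paper reports.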

Published

2020-04-03

How to Cite

Zhou, X., Zhang, Y., Cui, L., & Huang, D. (2020). Evaluating Commonsense in Pre-Trained Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 9733-9740. https://doi.org/10.1609/aaai.v34i05.6523

Section

AAAI Technical Track: Natural Language Processing