Zero-Shot Commonsense Question Answering with Cloze Translation and Consistency Optimization


  • Zi-Yi Dou University of California, Los Angeles
  • Nanyun Peng University of California, Los Angeles





Commonsense question answering (CQA) aims to test whether models can answer questions regarding commonsense knowledge that everyone knows. Prior works that incorporate external knowledge bases have shown promising results, but knowledge bases are expensive to construct and are often limited to a fixed set of relations. In this paper, we instead focus on better utilizing the implicit knowledge stored in pre-trained language models. While researchers have found that the knowledge embedded in pre-trained language models can be extracted by having them fill in the blanks of carefully designed prompts for relation extraction and text classification, it remains unclear whether we can adopt this paradigm in CQA, where the inputs and outputs take much more flexible forms. To this end, we investigate four translation methods that can translate natural questions into cloze-style sentences to better solicit commonsense knowledge from language models: a syntax-based model, an unsupervised neural model, and two supervised neural models. In addition, to combine the different translation methods, we propose to encourage consistency among model predictions on different translated questions with unlabeled data. We demonstrate the effectiveness of our methods on three CQA datasets in zero-shot settings. We show that our methods are complementary to a knowledge-base-enhanced model, and that combining them leads to state-of-the-art zero-shot performance. Analyses also reveal distinct characteristics of the different cloze translation methods and provide insights into why combining them yields substantial improvements. Code/dataset is available at
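To make the two central ideas concrete, the following is a minimal, illustrative sketch (not the authors' implementation): a deliberately naive rule-based cloze translation that masks the wh-word, and a consistency objective that penalizes the average KL divergence of each translation's answer distribution from their mean. The function names and the example distributions are hypothetical.

```python
import math

def naive_cloze(question):
    # Naive illustration of rule-based cloze translation: replace the
    # wh-word with a mask token and turn the question into a statement.
    # A real syntax-based method would reorder constituents as well.
    return question.rstrip("?").replace("What", "[MASK]") + "."

def mean_distribution(dists):
    # Element-wise average of the per-translation answer distributions.
    n = len(dists)
    return [sum(d[i] for d in dists) / n for i in range(len(dists[0]))]

def kl(p, q, eps=1e-12):
    # KL(p || q), with a small epsilon for numerical safety.
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def consistency_loss(dists):
    # Average KL divergence of each translation's distribution from the
    # mean; minimizing this on unlabeled data encourages the different
    # cloze translations to agree on their predictions.
    m = mean_distribution(dists)
    return sum(kl(d, m) for d in dists) / len(dists)

# Hypothetical answer distributions over three candidates, one per
# cloze translation of the same question.
dists = [
    [0.7, 0.2, 0.1],
    [0.6, 0.3, 0.1],
    [0.8, 0.1, 0.1],
]
loss = consistency_loss(dists)
```

The loss is zero exactly when all translations produce identical distributions, so driving it down aligns the predictions without requiring any labels.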




How to Cite

Dou, Z.-Y., & Peng, N. (2022). Zero-Shot Commonsense Question Answering with Cloze Translation and Consistency Optimization. Proceedings of the AAAI Conference on Artificial Intelligence, 36(10), 10572–10580.



AAAI Technical Track on Speech and Natural Language Processing