Getting Closer to AI Complete Question Answering: A Set of Prerequisite Real Tasks


  • Anna Rogers University of Massachusetts Lowell
  • Olga Kovaleva University of Massachusetts Lowell
  • Matthew Downey University of Massachusetts Lowell
  • Anna Rumshisky University of Massachusetts Lowell



The recent explosion in question answering research has produced a wealth of both factoid reading comprehension (RC) and commonsense reasoning datasets. Combining them presents a different kind of task: deciding not simply whether information is present in the text, but also whether a confident guess could be made for the missing information. We present QuAIL, the first RC dataset to combine text-based, world-knowledge, and unanswerable questions, and to provide question type annotation that enables diagnosis of the reasoning strategies used by a given QA system. QuAIL contains 15K multiple-choice questions for 800 texts in 4 domains. Crucially, it offers both general and text-specific questions, unlikely to be found in pretraining data. We show that QuAIL poses substantial challenges to current state-of-the-art systems, with a 30% drop in accuracy compared to the most similar existing dataset.




How to Cite

Rogers, A., Kovaleva, O., Downey, M., & Rumshisky, A. (2020). Getting Closer to AI Complete Question Answering: A Set of Prerequisite Real Tasks. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 8722-8731.



AAAI Technical Track: Natural Language Processing