Natural Language Inference in Context - Investigating Contextual Reasoning over Long Texts

Authors

  • Hanmeng Liu, Zhejiang University
  • Leyang Cui, Zhejiang University
  • Jian Liu, Fudan University
  • Yue Zhang, Westlake University

Keywords:

Lexical & Frame Semantics, Semantic Parsing

Abstract

Natural language inference (NLI) is a fundamental NLP task, investigating the entailment relationship between two texts. Popular NLI datasets present the task at the sentence level. While adequate for testing semantic representations, they fall short of testing contextual reasoning over long texts, which is a natural part of the human inference process. We introduce ConTRoL, a new dataset for ConTextual Reasoning over Long texts. Consisting of 8,325 expert-designed "context-hypothesis" pairs with gold labels, ConTRoL is a passage-level NLI dataset with a focus on complex contextual reasoning types such as logical reasoning. It is derived from competitive selection and recruitment tests (verbal reasoning tests) for police recruitment, and is therefore of expert-level quality. Compared with previous NLI benchmarks, the materials in ConTRoL are much more challenging, involving a range of reasoning types. Empirical results show that state-of-the-art language models perform far worse than educated humans. Our dataset can also serve as a test set for downstream tasks such as checking the factual correctness of summaries.
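The "context-hypothesis" pairs described above follow the standard three-way NLI labelling scheme (entailment, contradiction, neutral), evaluated by label accuracy. A minimal sketch of that format and metric, with illustrative field names and example text that are hypothetical rather than taken from the released dataset:

```python
# Illustrative sketch of a passage-level NLI pair; field names and
# example text are hypothetical, not ConTRoL's actual schema.
example = {
    "context": ("All applicants must pass a fitness test. "
                "Candidates who fail any stage are not invited back."),
    "hypothesis": "An applicant who fails the fitness test may reapply.",
    "label": "contradiction",  # one of: entailment / contradiction / neutral
}

def accuracy(gold_labels, predicted_labels):
    """Fraction of pairs whose predicted label matches the gold label."""
    correct = sum(g == p for g, p in zip(gold_labels, predicted_labels))
    return correct / len(gold_labels)

# Two of three labels match, so accuracy is 2/3.
print(accuracy(["entailment", "contradiction", "neutral"],
               ["entailment", "neutral", "neutral"]))
```

Comparing this accuracy for a model against the figure obtained from educated human annotators gives the model-human gap reported in the paper.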

Published

2021-05-18

How to Cite

Liu, H., Cui, L., Liu, J., & Zhang, Y. (2021). Natural Language Inference in Context - Investigating Contextual Reasoning over Long Texts. Proceedings of the AAAI Conference on Artificial Intelligence, 35(15), 13388-13396. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17580

Section

AAAI Technical Track on Speech and Natural Language Processing II