Learning Directional Sentence-Pair Embedding for Natural Language Reasoning (Student Abstract)


  • Yuchen Jiang Zhejiang University
  • Zhenxin Xiao Zhejiang University
  • Kai-Wei Chang University of California, Los Angeles




Enabling models to reason and make inferences over text is one of the core missions of natural language understanding. Although deep learning models show strong performance on various cross-sentence inference benchmarks, recent work has shown that they leverage spurious statistical cues rather than capturing the deeper relations implied between pairs of sentences. In this paper, we show that state-of-the-art language encoding models are especially bad at modeling directional relations between sentences by proposing a new evaluation task: Cause-and-Effect relation prediction. Backed by our curated Cause-and-Effect Relation dataset (Cℰℛ), we also demonstrate that a mutual attention mechanism, when added to existing transformer-based models, can guide the model to focus on capturing directional relations between sentences. Experimental results show that the proposed approach improves performance on downstream applications, such as the abductive reasoning task.
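The abstract names a mutual attention mechanism for directional sentence-pair modeling but does not spell out its form. As an illustrative sketch only (the function name, pooling choice, and scaled dot-product scoring are assumptions, not the paper's published architecture), cross-attention between two sentence encodings can produce two direction-specific summaries, one for each reading order of the pair:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mutual_attention(A, B):
    """Cross-attend two token-embedding matrices A (m x d) and B (n x d).

    Returns two direction-specific pooled features: one where each token
    of A summarizes B (the A -> B direction), and one where each token of
    B summarizes A (the B -> A direction). Keeping the two directions
    separate is what lets a classifier distinguish cause -> effect from
    effect -> cause.
    """
    d = A.shape[1]
    scores = A @ B.T / np.sqrt(d)           # (m, n) scaled similarities
    ctx_ab = softmax(scores, axis=1) @ B    # (m, d): A attends over B
    ctx_ba = softmax(scores.T, axis=1) @ A  # (n, d): B attends over A
    # Mean-pool each direction into a fixed-size sentence-pair feature.
    return ctx_ab.mean(axis=0), ctx_ba.mean(axis=0)
```

Note that swapping the input order simply swaps the two summaries, so the representation is asymmetric in exactly the sense a directional (cause-and-effect) task requires.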




How to Cite

Jiang, Y., Xiao, Z., & Chang, K.-W. (2020). Learning Directional Sentence-Pair Embedding for Natural Language Reasoning (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 34(10), 13825-13826. https://doi.org/10.1609/aaai.v34i10.7184



Student Abstract Track