Co-Attention Hierarchical Network: Generating Coherent Long Distractors for Reading Comprehension

Authors

  • Xiaorui Zhou, Beijing Institute of Technology
  • Senlin Luo, Beijing Institute of Technology
  • Yunfang Wu, Peking University

DOI:

https://doi.org/10.1609/aaai.v34i05.6522

Abstract

In reading comprehension, generating sentence-level distractors is a significant task that requires a deep understanding of the article and the question. Traditional entity-centered methods can only generate word-level or phrase-level distractors. Although recently proposed neural methods such as the sequence-to-sequence (Seq2Seq) model show great potential for generating creative text, previous neural methods for distractor generation ignore two important aspects. First, they did not model the interactions between the article and the question, so the generated distractors tend to be too general or irrelevant to the question context. Second, they did not emphasize the relationship between the distractor and the article, so the generated distractors are not semantically relevant to the article and thus fail to form a set of meaningful options. To address the first problem, we propose a co-attention enhanced hierarchical architecture that better captures the interactions between the article and the question, thereby guiding the decoder to generate more coherent distractors. To alleviate the second problem, we add an additional semantic similarity loss that pushes the generated distractors to be more relevant to the article. Experimental results show that our model outperforms several strong baselines on automatic metrics, achieving state-of-the-art performance. A further human evaluation indicates that our generated distractors are more coherent and more educative than those generated by the baselines.
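To make the two ideas in the abstract concrete, below is a minimal PyTorch sketch of (a) a co-attention fusion between article and question encodings and (b) an auxiliary semantic similarity loss added to the generation objective. This is not the authors' implementation: the bilinear affinity, the cosine-based loss, the module names, and the loss weight are all illustrative assumptions.

```python
# A minimal sketch, assuming a bilinear co-attention and a cosine similarity
# loss; the paper's exact formulations may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CoAttentionFusion(nn.Module):
    """Fuses article and question encodings via a soft co-attention matrix."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.bilinear = nn.Linear(hidden_size, hidden_size, bias=False)

    def forward(self, article: torch.Tensor, question: torch.Tensor) -> torch.Tensor:
        # article:  (batch, art_len, hidden)
        # question: (batch, q_len, hidden)
        # Affinity score between every article token and every question token.
        affinity = torch.bmm(self.bilinear(article), question.transpose(1, 2))
        # Attend over question tokens for each article position.
        attn = F.softmax(affinity, dim=-1)            # (batch, art_len, q_len)
        q_aware = torch.bmm(attn, question)           # (batch, art_len, hidden)
        # Question-aware article representation, to be fed to a hierarchical
        # encoder/decoder stack.
        return torch.cat([article, q_aware], dim=-1)  # (batch, art_len, 2*hidden)


def semantic_similarity_loss(distractor_repr: torch.Tensor,
                             article_repr: torch.Tensor) -> torch.Tensor:
    """Auxiliary loss pulling the distractor representation toward the article's.

    One plausible instantiation (an assumption, not the paper's exact formula):
    minimize 1 - cos(d, a) between pooled sentence vectors.
    """
    return (1.0 - F.cosine_similarity(distractor_repr, article_repr, dim=-1)).mean()


if __name__ == "__main__":
    batch, art_len, q_len, hidden = 2, 50, 12, 64
    fusion = CoAttentionFusion(hidden)
    article = torch.randn(batch, art_len, hidden)
    question = torch.randn(batch, q_len, hidden)
    fused = fusion(article, question)
    print(fused.shape)  # torch.Size([2, 50, 128])

    # Combined training objective: generation loss plus weighted similarity loss.
    gen_loss = torch.tensor(2.3)  # stand-in for the Seq2Seq cross-entropy term
    sim_loss = semantic_similarity_loss(torch.randn(batch, hidden),
                                        torch.randn(batch, hidden))
    total = gen_loss + 0.5 * sim_loss  # 0.5 is an assumed weighting hyperparameter
    print(total.item())
```

The key design point the sketch illustrates is that the similarity term acts only as a regularizer on the decoder's output representation; the main supervision still comes from the standard generation loss.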

Published

2020-04-03

How to Cite

Zhou, X., Luo, S., & Wu, Y. (2020). Co-Attention Hierarchical Network: Generating Coherent Long Distractors for Reading Comprehension. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 9725-9732. https://doi.org/10.1609/aaai.v34i05.6522

Section

AAAI Technical Track: Natural Language Processing