HopRetriever: Retrieve Hops over Wikipedia to Answer Complex Questions

Authors

  • Shaobo Li Harbin Institute of Technology
  • Xiaoguang Li Huawei Noah's Ark Lab
  • Lifeng Shang Huawei Noah's Ark Lab
  • Xin Jiang Huawei Noah's Ark Lab
  • Qun Liu Huawei Noah's Ark Lab
  • Chengjie Sun Harbin Institute of Technology
  • Zhenzhou Ji Harbin Institute of Technology
  • Bingquan Liu Harbin Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v35i15.17568

Keywords:

Question Answering

Abstract

Collecting supporting evidence from large corpora of text (e.g., Wikipedia) is a major challenge for open-domain Question Answering (QA). In particular, multi-hop open-domain QA requires scattered pieces of evidence to be gathered together to support answer extraction. In this paper, we propose a new retrieval target, the hop, to collect the hidden reasoning evidence from Wikipedia for complex question answering. Specifically, a hop is defined as the combination of a hyperlink and its corresponding outbound link document. The hyperlink is encoded as a mention embedding, which models the structured knowledge of how the outbound link entity is mentioned in its textual context, and the corresponding outbound link document is encoded as a document embedding, which represents the unstructured knowledge within it. Accordingly, we build HopRetriever, which retrieves hops over Wikipedia to answer complex questions. Experiments on the HotpotQA dataset demonstrate that HopRetriever outperforms previously published evidence retrieval methods by large margins. Moreover, our approach also yields quantifiable interpretations of the evidence collection process.
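To make the hop definition above concrete, the following is a minimal, self-contained sketch of encoding a hop as the combination of a mention embedding (structured knowledge) and a document embedding (unstructured knowledge). The `toy_encode` function, the element-wise combination, and all names here are illustrative assumptions; the paper itself uses learned Transformer-based encoders, not this toy hashing scheme.

```python
from dataclasses import dataclass

DIM = 8  # toy embedding size (assumption; real models use hundreds of dims)

def toy_encode(text: str) -> list[float]:
    """Deterministic stand-in for a learned text encoder (assumption)."""
    vec = [0.0] * DIM
    for i, ch in enumerate(text):
        vec[i % DIM] += ord(ch) / 1000.0
    return vec

@dataclass
class Hop:
    """A hop = a hyperlink mention plus its outbound link document."""
    mention_context: str    # sentence in which the hyperlink appears
    outbound_document: str  # text of the linked Wikipedia article

    def encode(self) -> list[float]:
        # Structured knowledge: how the linked entity is mentioned in context.
        mention_emb = toy_encode(self.mention_context)
        # Unstructured knowledge: the content of the linked document.
        doc_emb = toy_encode(self.outbound_document)
        # Combine both views into a single hop embedding. Element-wise
        # summation is an illustrative choice; the paper learns this fusion.
        return [m + d for m, d in zip(mention_emb, doc_emb)]

def score(question: str, hop: Hop) -> float:
    """Dot-product relevance between question and hop embeddings."""
    q = toy_encode(question)
    h = hop.encode()
    return sum(a * b for a, b in zip(q, h))
```

A retriever built on this idea would score every candidate hop against the question and follow the highest-scoring one, repeating the process to collect multi-hop evidence.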

Published

2021-05-18

How to Cite

Li, S., Li, X., Shang, L., Jiang, X., Liu, Q., Sun, C., Ji, Z., & Liu, B. (2021). HopRetriever: Retrieve Hops over Wikipedia to Answer Complex Questions. Proceedings of the AAAI Conference on Artificial Intelligence, 35(15), 13279-13287. https://doi.org/10.1609/aaai.v35i15.17568

Section

AAAI Technical Track on Speech and Natural Language Processing II