Hybrid Autoregressive Inference for Scalable Multi-Hop Explanation Regeneration

Authors

  • Marco Valentino — University of Manchester, United Kingdom; Idiap Research Institute, Switzerland
  • Mokanarangan Thayaparan — University of Manchester, United Kingdom; Idiap Research Institute, Switzerland
  • Deborah Ferreira — University of Manchester, United Kingdom
  • André Freitas — University of Manchester, United Kingdom; Idiap Research Institute, Switzerland

DOI:

https://doi.org/10.1609/aaai.v36i10.21392

Keywords:

Speech & Natural Language Processing (SNLP)

Abstract

Regenerating natural language explanations in the scientific domain has been proposed as a benchmark to evaluate complex multi-hop and explainable inference. In this context, large language models can achieve state-of-the-art performance when employed as cross-encoder architectures and fine-tuned on human-annotated explanations. However, while much attention has been devoted to the quality of the explanations, the problem of performing inference efficiently is largely understudied. Cross-encoders, in fact, are intrinsically not scalable, possessing limited applicability to real-world scenarios that require inference on massive fact banks. To enable complex multi-hop reasoning at scale, this paper focuses on bi-encoder architectures, investigating the problem of scientific explanation regeneration at the intersection of dense and sparse models. Specifically, we present SCAR (for SCalable AutoRegressive inference), a hybrid framework that iteratively combines a Transformer-based bi-encoder with a sparse model of explanatory power, designed to leverage explicit inference patterns in the explanations. Our experiments demonstrate that the hybrid framework significantly outperforms previous sparse models, achieving performance comparable with that of state-of-the-art cross-encoders while being approximately 50 times faster and scalable to corpora of millions of facts. Further analyses on semantic drift and multi-hop question answering reveal that the proposed hybridisation boosts the quality of the most challenging explanations, contributing to improved performance on downstream inference tasks.
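The abstract describes an iterative loop that combines dense bi-encoder similarity with a sparse explanatory-power score, conditioning each retrieval step on the partial explanation built so far. The following is a minimal illustrative sketch of that idea, not the paper's implementation: the function name, the linear score combination, and the query-update rule (averaging in the selected fact's embedding) are simplifying assumptions made here for illustration.

```python
import numpy as np

def hybrid_autoregressive_retrieve(query_vec, fact_vecs, sparse_scores,
                                   n_hops=3, alpha=0.5):
    """Illustrative sketch (hypothetical): at each hop, combine a dense
    bi-encoder cosine similarity with a precomputed sparse explanatory-power
    score, select the best unused fact, and condition the next step on the
    partial explanation by folding the chosen fact into the query vector."""
    selected = []
    q = query_vec.astype(float).copy()
    fact_norms = np.linalg.norm(fact_vecs, axis=1) + 1e-9
    for _ in range(n_hops):
        dense = (fact_vecs @ q) / (fact_norms * (np.linalg.norm(q) + 1e-9))
        combined = alpha * dense + (1 - alpha) * sparse_scores
        combined[selected] = -np.inf  # exclude already-selected facts
        best = int(np.argmax(combined))
        selected.append(best)
        # autoregressive update: condition on the partial explanation
        q = (q + fact_vecs[best]) / 2.0
    return selected
```

Because the fact bank is only encoded once with a bi-encoder (rather than jointly re-encoded with the query, as in a cross-encoder), each hop reduces to a matrix-vector product, which is what makes this style of inference scalable to large corpora.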

Published

2022-06-28

How to Cite

Valentino, M., Thayaparan, M., Ferreira, D., & Freitas, A. (2022). Hybrid Autoregressive Inference for Scalable Multi-Hop Explanation Regeneration. Proceedings of the AAAI Conference on Artificial Intelligence, 36(10), 11403-11411. https://doi.org/10.1609/aaai.v36i10.21392

Section

AAAI Technical Track on Speech and Natural Language Processing