Supervising Model Attention with Human Explanations for Robust Natural Language Inference

Authors

  • Joe Stacey Imperial College London
  • Yonatan Belinkov Technion
  • Marek Rei Imperial College London

DOI:

https://doi.org/10.1609/aaai.v36i10.21386

Keywords:

Speech & Natural Language Processing (SNLP)

Abstract

Natural Language Inference (NLI) models are known to learn from biases and artefacts within their training data, impacting how well they generalise to other unseen datasets. Existing de-biasing approaches focus on preventing the models from learning these biases, which can result in restrictive models and lower performance. We instead investigate teaching the model how a human would approach the NLI task, in order to learn features that will generalise better to previously unseen examples. Using natural language explanations, we supervise the model’s attention weights to encourage more attention to be paid to the words present in the explanations, significantly improving model performance. Our experiments show that the in-distribution improvements of this method are also accompanied by out-of-distribution improvements, with the supervised models learning from features that generalise better to other NLI datasets. Analysis of the model indicates that human explanations encourage increased attention on the important words, with more attention paid to words in the premise and less attention paid to punctuation and stopwords.
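The attention-supervision idea described in the abstract can be illustrated with a short sketch. This is a minimal illustration under assumed names, not the authors' implementation: it presumes a model that exposes token-level attention weights, a binary mask marking which input tokens appear in the human explanation, and a hypothetical weighting hyperparameter lambda_expl; the distance function between attention and the explanation-derived target is also an assumption.

```python
import torch
import torch.nn.functional as F

def nli_loss_with_attention_supervision(logits, labels, attention,
                                        expl_mask, lambda_expl=1.0):
    """Combine the standard NLI cross-entropy with an auxiliary term
    that encourages attention mass on explanation tokens (sketch only).

    logits:      (batch, num_classes) NLI predictions
    labels:      (batch,) gold labels (entailment/neutral/contradiction)
    attention:   (batch, seq_len) model attention weights over input tokens
    expl_mask:   (batch, seq_len) 1.0 where a token appears in the human
                 explanation, 0.0 elsewhere
    lambda_expl: weight on the supervision term (hypothetical name)
    """
    # Standard NLI classification loss.
    ce = F.cross_entropy(logits, labels)

    # Target distribution: uniform over the explanation tokens.
    target = expl_mask / expl_mask.sum(dim=1, keepdim=True).clamp(min=1.0)

    # Penalise the gap between the model's attention and the target;
    # MSE is one simple choice, the paper may use a different distance.
    attn_loss = F.mse_loss(attention, target)

    return ce + lambda_expl * attn_loss
```

Jointly minimising this combined loss nudges the model to attend to the words a human cited, while the cross-entropy term still drives the entailment prediction itself.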

Published

2022-06-28

How to Cite

Stacey, J., Belinkov, Y., & Rei, M. (2022). Supervising Model Attention with Human Explanations for Robust Natural Language Inference. Proceedings of the AAAI Conference on Artificial Intelligence, 36(10), 11349-11357. https://doi.org/10.1609/aaai.v36i10.21386

Section

AAAI Technical Track on Speech and Natural Language Processing