Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI

Authors

  • Suzanna Sia, Johns Hopkins University
  • Anton Belyy, Johns Hopkins University
  • Amjad Almahairi, Meta AI
  • Madian Khabsa, Meta AI
  • Luke Zettlemoyer, Meta AI
  • Lambert Mathias, Meta AI

DOI:

https://doi.org/10.1609/aaai.v37i8.26174

Keywords:

ML: Transparent, Interpretable, Explainable ML, CV: Language and Vision

Abstract

Evaluating an explanation's faithfulness is desirable for many reasons, such as trust, interpretability, and diagnosing the sources of a model's errors. In this work, which focuses on the NLI task, we introduce the methodology of Faithfulness-through-Counterfactuals, which first generates a counterfactual hypothesis based on the logical predicates expressed in the explanation, and then evaluates whether the model's prediction on the counterfactual is consistent with that expressed logic (i.e., whether the new formula is logically satisfiable). In contrast to existing approaches, this does not require any explanations for training a separate verification model. We first validate the efficacy of automatic counterfactual hypothesis generation, leveraging the few-shot priming paradigm. Next, we show that our proposed metric distinguishes between human-model agreement and disagreement on new counterfactual input. In addition, we conduct a sensitivity analysis to validate that our metric is sensitive to unfaithful explanations.
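The following is a minimal illustrative sketch of the Faithfulness-through-Counterfactuals idea described above, not the authors' released code: the function names, the language-model interface, and the prompt format are all hypothetical placeholders, and the consistency check is simplified to a single entailment case.

```python
# Hypothetical sketch of Faithfulness-through-Counterfactuals for NLI.
# `lm` and `nli_model` are assumed interfaces, not a specific library.

def generate_counterfactual_hypothesis(premise, hypothesis, explanation, lm):
    """Few-shot prime a language model to rewrite the hypothesis so that it
    violates the logical predicate expressed in the explanation."""
    prompt = (
        "Rewrite the hypothesis so that it violates the stated reason.\n"
        f"Premise: {premise}\n"
        f"Hypothesis: {hypothesis}\n"
        f"Reason: {explanation}\n"
        "Counterfactual hypothesis:"
    )
    return lm.generate(prompt)  # assumed text-generation interface


def is_explanation_faithful(nli_model, premise, hypothesis, explanation, lm):
    """Check logical satisfiability: if the explanation truly drives the
    prediction, negating its predicate should change the model's label
    on the counterfactual input."""
    original_label = nli_model.predict(premise, hypothesis)
    counterfactual = generate_counterfactual_hypothesis(
        premise, hypothesis, explanation, lm
    )
    counterfactual_label = nli_model.predict(premise, counterfactual)
    # Simplified consistency check: an entailment justified by the
    # explanation should no longer hold once its predicate is negated.
    return not (original_label == "entailment"
                and counterfactual_label == "entailment")
```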

Published

2023-06-26

How to Cite

Sia, S., Belyy, A., Almahairi, A., Khabsa, M., Zettlemoyer, L., & Mathias, L. (2023). Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI. Proceedings of the AAAI Conference on Artificial Intelligence, 37(8), 9837-9845. https://doi.org/10.1609/aaai.v37i8.26174

Section

AAAI Technical Track on Machine Learning III