Evaluate with the Inverse: Efficient Approximation of Latent Explanation Quality Distribution

Authors

  • Carlos Eiras-Franco — Universidade da Coruña, CITIC
  • Anna Hedström — UMI Lab, Leibniz Institute of Agricultural Engineering and Bioeconomy e.V. (ATB); BIFOLD – Berlin Institute for the Foundations of Learning and Data; Department of Computer Science, University of Potsdam
  • Marina M.-C. Höhne — Data Science Department, Leibniz Institute of Agricultural Engineering and Bioeconomy e.V. (ATB); Department of Computer Science, University of Potsdam

DOI:

https://doi.org/10.1609/aaai.v39i26.34935

Abstract

Obtaining high-quality explanations of a model's output enables developers to identify and correct biases, align the system's behavior with human values, and ensure ethical compliance. Explainable Artificial Intelligence (XAI) practitioners rely on specific measures to gauge the quality of such explanations. These measures assess key attributes, such as how closely an explanation aligns with a model's decision process (faithfulness), how accurately it pinpoints the relevant input features (localization), and its stability across similar cases (robustness). Despite providing valuable information, these measures do not fully address a practitioner's critical concern: how does the quality of a given explanation compare to that of other potential explanations? Traditionally, the quality of an explanation has been assessed by comparing it to a randomly generated counterpart. This paper introduces an alternative: the Quality Gap Estimate (QGE). The QGE method offers a direct comparison to what can be viewed as the "inverse" explanation, one that conceptually represents the antithesis of the original explanation. Our extensive testing across multiple model architectures, datasets, and established quality metrics demonstrates that the QGE method is superior to the traditional approach. Furthermore, we show that QGE enhances the statistical reliability of these quality assessments. This advance represents a significant step toward a more insightful evaluation of explanations, enabling a more effective inspection of a model's behavior.
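The core idea described above can be sketched in code. The snippet below is a minimal, hypothetical illustration (not the authors' implementation): it assumes a feature-attribution explanation, uses a toy pixel-flipping faithfulness metric, treats the "inverse" explanation as the one whose importance ranking is reversed (here, the negated attribution), and defines the quality gap as the metric difference between the explanation and its inverse. All function names and the choice of metric are assumptions for illustration only.

```python
import numpy as np

def pixel_flipping_score(model, x, attribution, n_steps=10):
    """Toy faithfulness metric: zero out the most-attributed features
    step by step and average how much the model's score drops.
    A larger average drop suggests a more faithful explanation."""
    order = np.argsort(attribution.ravel())[::-1]  # most important first
    x_flat = x.ravel().copy()
    base = model(x_flat)
    step = max(1, len(order) // n_steps)
    drops = []
    for i in range(step, len(order) + 1, step):
        perturbed = x_flat.copy()
        perturbed[order[:i]] = 0.0  # replace with a zero baseline
        drops.append(base - model(perturbed))
    return float(np.mean(drops))

def quality_gap_estimate(model, x, attribution, metric):
    """Hypothetical QGE sketch: quality of the explanation minus the
    quality of its 'inverse' (importance ranking reversed)."""
    inverse = -attribution  # reverses the feature-importance ranking
    return metric(model, x, attribution) - metric(model, x, inverse)

# Example: for a linear model, attribution w * x is faithful, so the
# gap over its inverse should be positive.
rng = np.random.default_rng(0)
w = rng.uniform(0.1, 1.0, size=16)
x = np.ones(16)
model = lambda v: float(w @ v)
attr = w * x
qge = quality_gap_estimate(model, x, attr, pixel_flipping_score)
```

The design choice worth noting is that, unlike a comparison against a random explanation, the inverse provides a deterministic worst-case-style reference point, so no sampling is required to obtain the comparison.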

Published

2025-04-11

How to Cite

Eiras-Franco, C., Hedström, A., & Höhne, M. M.-C. (2025). Evaluate with the Inverse: Efficient Approximation of Latent Explanation Quality Distribution. Proceedings of the AAAI Conference on Artificial Intelligence, 39(26), 27258–27267. https://doi.org/10.1609/aaai.v39i26.34935

Section

AAAI Technical Track on AI Alignment