Benchmarking XAI Explanations with Human-Aligned Evaluations

Authors

  • Rémi Kazmierczak ENSTA
  • Steve Azzolin University of Trento
  • Eloïse Berthier ENSTA
  • Anna Hedström TU Berlin
  • Patricia Delhomme Université Gustave Eiffel
  • David Filliat ENSTA
  • Nicolas Bousquet SINCLAIR Laboratory
  • Goran Frehse ENSTA
  • Massimiliano Mancini University of Trento
  • Baptiste Caramiaux Sorbonne Université
  • Andrea Passerini University of Trento
  • Gianni Franchi ENSTA

DOI:

https://doi.org/10.1609/aaai.v40i44.41082

Abstract

We introduce PASTA (Perceptual Assessment System for explanaTion of Artificial Intelligence), a novel human-centric framework for evaluating eXplainable AI (XAI) techniques in computer vision. Our first contribution is the creation of the PASTA-dataset, the first large-scale benchmark that spans a diverse set of models and both saliency-based and concept-based explanation methods. This dataset enables robust, comparative analysis of XAI techniques based on human judgment. Our second contribution is an automated, data-driven benchmark that predicts human preferences using the PASTA-dataset. This scoring method, called the PASTA-score, offers scalable, reliable, and consistent evaluation aligned with human perception. Additionally, our benchmark allows for comparisons between explanations across different modalities, an aspect previously unaddressed. We then propose to apply our scoring method to probe the interpretability of existing models and to build more human-interpretable XAI methods.

Published

2026-03-14

How to Cite

Kazmierczak, R., Azzolin, S., Berthier, E., Hedström, A., Delhomme, P., Filliat, D., … Franchi, G. (2026). Benchmarking XAI Explanations with Human-Aligned Evaluations. Proceedings of the AAAI Conference on Artificial Intelligence, 40(44), 37491–37500. https://doi.org/10.1609/aaai.v40i44.41082

Section

AAAI Special Track on AI Alignment