Explaining Image Classifiers Generating Exemplars and Counter-Exemplars from Latent Representations

Authors

  • Riccardo Guidotti, ISTI-CNR, Pisa
  • Anna Monreale, University of Pisa
  • Stan Matwin, Dalhousie University and Polish Academy of Sciences
  • Dino Pedreschi, University of Pisa

DOI:

https://doi.org/10.1609/aaai.v34i09.7116

Abstract

We present an approach to explaining the decisions of black box image classifiers through synthetic exemplars and counter-exemplars learned in the latent feature space. Our explanation method exploits the latent representation learned by an adversarial autoencoder to generate a synthetic neighborhood of the image to be explained. A decision tree is trained on a set of images represented in the latent space, and its decision rules are used to generate exemplar images showing how the original image can be modified while remaining within its class. Counterfactual rules are used to generate counter-exemplars showing how the original image can “morph” into another class. The explanation also includes a saliency map highlighting the areas of the image that contribute to its classification and the areas that push it toward another class. A wide and deep experimental evaluation shows that the proposed method outperforms existing explainers in terms of fidelity, relevance, coherence, and stability, besides providing the most useful and interpretable explanations.
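The pipeline described in the abstract can be summarized as: encode the image, perturb its latent code to build a synthetic neighborhood, label the decoded neighbors with the black box, fit a decision tree surrogate in the latent space, and decode latent points that satisfy the factual and counterfactual rules into exemplars and counter-exemplars. The sketch below illustrates this flow under stated assumptions; it is not the authors' released code. The `encode`, `decode`, and `black_box` functions are hypothetical placeholders for a trained adversarial autoencoder and the classifier under explanation, and the paper's rule-guided generation is approximated here by selecting neighbors that the surrogate assigns to the same or to a different class.

```python
# Minimal sketch of an exemplar / counter-exemplar pipeline (assumptions noted above).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
LATENT_DIM = 8

def encode(image):
    # Placeholder: a real adversarial autoencoder encoder maps the image to a latent code.
    return rng.normal(size=LATENT_DIM)

def decode(z):
    # Placeholder: a real decoder reconstructs an image from a latent code.
    return np.clip(z.sum(), 0, 1) * np.ones((28, 28))

def black_box(images):
    # Placeholder black-box classifier returning one label per image.
    return np.array([int(img.mean() > 0.5) for img in images])

def explain(image, n_samples=500, sigma=0.5):
    """Generate exemplars and counter-exemplars for a single image."""
    z = encode(image)
    # 1. Synthetic neighborhood in the latent space around z.
    Z = z + rng.normal(scale=sigma, size=(n_samples, LATENT_DIM))
    # 2. Label the decoded neighbors with the black box.
    y = black_box([decode(zi) for zi in Z])
    # 3. Fit an interpretable surrogate (decision tree) on the latent points.
    tree = DecisionTreeClassifier(max_depth=4).fit(Z, y)
    label = black_box([image])[0]
    # 4. Latent points the surrogate keeps in the same class (factual rule)
    #    vs. points it pushes into another class (counterfactual rules).
    same = Z[tree.predict(Z) == label]
    other = Z[tree.predict(Z) != label]
    exemplars = [decode(zi) for zi in same[:3]]
    counter_exemplars = [decode(zi) for zi in other[:3]]
    return exemplars, counter_exemplars

exemplars, counters = explain(np.zeros((28, 28)))
print(len(exemplars), len(counters))
```

In the same spirit, a saliency map could be obtained by comparing the original image with its exemplars and counter-exemplars pixel by pixel, highlighting the regions that change when the latent code crosses a rule boundary.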

Published

2020-04-03

How to Cite

Guidotti, R., Monreale, A., Matwin, S., & Pedreschi, D. (2020). Explaining Image Classifiers Generating Exemplars and Counter-Exemplars from Latent Representations. Proceedings of the AAAI Conference on Artificial Intelligence, 34(09), 13665-13668. https://doi.org/10.1609/aaai.v34i09.7116