Invertible Concept-based Explanations for CNN Models with Non-negative Concept Activation Vectors

Authors

  • Ruihan Zhang, School of Computing and Information Systems, The University of Melbourne
  • Prashan Madumal, School of Computing and Information Systems, The University of Melbourne
  • Tim Miller, School of Computing and Information Systems, The University of Melbourne
  • Krista A. Ehinger, School of Computing and Information Systems, The University of Melbourne
  • Benjamin I. P. Rubinstein, School of Computing and Information Systems, The University of Melbourne

DOI

https://doi.org/10.1609/aaai.v35i13.17389

Keywords

Accountability, Interpretability & Explainability

Abstract

Convolutional neural network (CNN) models for computer vision are powerful but lack explainability in their most basic form. This deficiency remains a key challenge when applying CNNs in important domains. Recent work on explanations through feature importance of approximate linear models has moved from input-level features (pixels or segments) to features drawn from mid-layer feature maps, in the form of concept activation vectors (CAVs). CAVs contain concept-level information and can be learned via clustering. In this work, we rethink the ACE algorithm of Ghorbani et al. and propose an alternative invertible concept-based explanation (ICE) framework to overcome its shortcomings. Based on the requirements of fidelity (how closely the approximate model matches the target model) and interpretability (how meaningful the concepts are to people), we design measurements and evaluate a range of matrix factorization methods within our framework. We find that non-negative concept activation vectors (NCAVs), obtained via non-negative matrix factorization, provide superior interpretability and fidelity in both computational and human-subject experiments. Our framework provides local and global concept-level explanations for pre-trained CNN models.
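
As a rough illustration of the idea summarized above, the sketch below applies off-the-shelf non-negative matrix factorization to mid-layer feature maps of a pre-trained CNN, treating the factorization's basis vectors as concept directions (NCAVs) and its coefficients as spatial concept scores. The model, layer, number of concepts, and input batch are illustrative assumptions, not the authors' released implementation.

# Minimal sketch (assumed setup, not the authors' code): NMF over mid-layer
# CNN feature maps to obtain non-negative concept activation vectors (NCAVs).
import torch
import torchvision.models as models
from sklearn.decomposition import NMF

model = models.resnet50(pretrained=True).eval()

# Capture activations from a mid-layer (here: the last residual block) via a hook.
activations = []
def hook(_module, _inputs, output):
    activations.append(output.detach())

model.layer4.register_forward_hook(hook)

# Placeholder for a batch of preprocessed class images, shape (N, 3, 224, 224).
images = torch.rand(8, 3, 224, 224)
with torch.no_grad():
    model(images)

feats = activations[0]                                 # (N, C, H, W); ReLU outputs, so non-negative
n, c, h, w = feats.shape
V = feats.permute(0, 2, 3, 1).reshape(-1, c).numpy()   # (N*H*W, C): one row per spatial position

# NMF: V ~ S @ P, where rows of P are the NCAV directions and S holds each
# spatial position's non-negative concept scores.
n_concepts = 10                                        # assumed number of concepts
nmf = NMF(n_components=n_concepts, init="nndsvda", max_iter=400)
S = nmf.fit_transform(V)                               # (N*H*W, n_concepts) concept scores
P = nmf.components_                                    # (n_concepts, C) concept directions

# Reshaping S back to (N, H, W, n_concepts) yields per-concept spatial maps,
# which can be upsampled to the input resolution for local explanations.
concept_maps = S.reshape(n, h, w, n_concepts)

Because both factors are constrained to be non-negative, the concept scores compose additively, which is what makes the decomposition invertible back into an approximation of the original feature map and gives the concepts their interpretable, parts-based character.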

Published

2021-05-18

How to Cite

Zhang, R., Madumal, P., Miller, T., Ehinger, K. A., & Rubinstein, B. I. P. (2021). Invertible Concept-based Explanations for CNN Models with Non-negative Concept Activation Vectors. Proceedings of the AAAI Conference on Artificial Intelligence, 35(13), 11682-11690. https://doi.org/10.1609/aaai.v35i13.17389

Issue

Vol. 35 No. 13 (2021)

Section

AAAI Technical Track on Philosophy and Ethics of AI