Rethinking Interpretation: Input-Agnostic Saliency Mapping of Deep Visual Classifiers

Authors

  • Naveed Akhtar, The University of Western Australia
  • Mohammad Amir Asim Khan Jalwana, The University of Western Australia

DOI:

https://doi.org/10.1609/aaai.v37i1.25089

Keywords:

CV: Interpretability and Transparency, CV: Learning & Optimization for CV, CV: Other Foundations of Computer Vision

Abstract

Saliency methods provide post-hoc model interpretation by attributing input features to the model outputs. Current methods mainly achieve this using a single input sample, thereby failing to answer input-independent inquiries about the model. We also show that input-specific saliency mapping is intrinsically susceptible to misleading feature attribution. Current attempts to use 'general' input features for model interpretation assume access to a dataset containing those features, which biases the interpretation. Addressing this gap, we introduce a new perspective of input-agnostic saliency mapping that computationally estimates the high-level features attributed by the model to its outputs. These features are geometrically correlated, and are computed by accumulating the model's gradient information with respect to an unrestricted data distribution. To compute these features, we nudge independent data points over the model's loss surface towards the local minima associated with a human-understandable concept, e.g., a class label for classifiers. Through a systematic projection, scaling, and refinement process, this information is transformed into an interpretable visualization without compromising its model fidelity. The visualization serves as a stand-alone qualitative interpretation. With an extensive evaluation, we not only demonstrate successful visualizations for a variety of concepts for large-scale models, but also showcase an interesting utility of this new form of saliency mapping by identifying backdoor signatures in compromised classifiers.
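To make the gradient-accumulation idea in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' algorithm: it nudges independently sampled random inputs toward the loss minimum of a chosen class while accumulating input gradients along each trajectory. The function name, the Gaussian sampling, all hyperparameters, and the simple min-max scaling at the end are hypothetical stand-ins for the paper's systematic projection, scaling, and refinement process.

import torch
import torch.nn.functional as F

def input_agnostic_saliency(model, target_class, num_points=32, steps=100,
                            lr=0.05, input_shape=(3, 224, 224)):
    # Hypothetical sketch of input-agnostic saliency, assuming a standard
    # PyTorch image classifier; not the authors' exact procedure.
    model.eval()
    accumulated = torch.zeros(input_shape)
    target = torch.tensor([target_class])
    for _ in range(num_points):
        # Independent data point from an unrestricted (here: Gaussian) distribution.
        x = torch.randn(1, *input_shape, requires_grad=True)
        for _ in range(steps):
            loss = F.cross_entropy(model(x), target)
            grad, = torch.autograd.grad(loss, x)
            # Accumulate gradient information along the descent trajectory.
            accumulated += grad.squeeze(0).abs()
            with torch.no_grad():
                x -= lr * grad  # nudge toward the class-specific local minimum
    saliency = accumulated.sum(dim=0)  # collapse channels into one map
    # Crude min-max scaling to [0, 1] for visualization.
    return (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)

Under these assumptions, a call such as input_agnostic_saliency(torchvision.models.resnet50(weights="DEFAULT"), target_class=207) would yield a 224x224 map for one ImageNet class, without reference to any specific input image.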

Published

2023-06-26

How to Cite

Akhtar, N., & Jalwana, M. A. A. K. (2023). Rethinking Interpretation: Input-Agnostic Saliency Mapping of Deep Visual Classifiers. Proceedings of the AAAI Conference on Artificial Intelligence, 37(1), 178-186. https://doi.org/10.1609/aaai.v37i1.25089

Section

AAAI Technical Track on Computer Vision I