Towards More Robust Interpretation via Local Gradient Alignment

Authors

  • Sunghwan Joo, Department of ECE, Sungkyunkwan University
  • SeokHyeon Jeong, Department of ECE, Seoul National University
  • Juyeon Heo, University of Cambridge
  • Adrian Weller, University of Cambridge and The Alan Turing Institute
  • Taesup Moon, Department of ECE and ASRI/INMC/IPAI/AIIS, Seoul National University

DOI:

https://doi.org/10.1609/aaai.v37i7.25986

Keywords:

ML: Transparent, Interpretable, Explainable ML; ML: Adversarial Learning & Robustness; ML: Deep Neural Network Algorithms; PEAI: Safety, Robustness & Trustworthiness

Abstract

Neural network interpretation methods, particularly feature attribution methods, are known to be fragile with respect to adversarial input perturbations. To address this, several methods that enhance the local smoothness of the gradient during training have been proposed for attaining robust feature attributions. However, previous work has not accounted for the normalization of attributions, which is essential for their visualization, and this has been an obstacle to understanding and improving the robustness of feature attribution methods. In this paper, we provide new insights by taking such normalization into account. First, we show that for every non-negative homogeneous neural network, a naive l2-robust criterion for gradients is not normalization invariant: two functions with the same normalized gradient can yield different values of the criterion. Second, we formulate a normalization-invariant, cosine distance-based criterion and derive its upper bound, which explains why simply minimizing the Hessian norm at the input, as done in previous work, is not sufficient for attaining robust feature attribution. Finally, we propose combining both the l2 and cosine distance-based criteria as regularization terms to leverage the advantages of each in aligning the local gradient. As a result, we experimentally show that models trained with our method produce much more robust interpretations on CIFAR-10 and ImageNet-100 than recent baselines, without significantly hurting accuracy. To the best of our knowledge, this is the first work to verify the robustness of interpretation on a dataset larger than CIFAR-10, which is made possible by the computational efficiency of our method.
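To make the combined criterion concrete, the sketch below shows one way such a regularizer could look in PyTorch. It is not the authors' released implementation: the choice of the true-class logit as the attribution target, the uniform perturbation inside an eps-ball, and the weights lambda_l2 and lambda_cos are illustrative assumptions; only the idea of penalizing both the l2 distance and the cosine distance between local input gradients comes from the abstract.

```python
# Minimal sketch of a combined l2 + cosine-distance gradient-alignment
# regularizer (illustrative; hyperparameters and perturbation scheme are
# assumptions, not the paper's exact recipe).
import torch
import torch.nn.functional as F

def input_gradient(model, x, y):
    """Gradient of the summed true-class logits with respect to the input."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    score = logits.gather(1, y.unsqueeze(1)).sum()
    # create_graph=True keeps the graph so the regularizer can be
    # backpropagated through to the model parameters.
    grad, = torch.autograd.grad(score, x, create_graph=True)
    return grad

def gradient_alignment_loss(model, x, y, eps=8 / 255,
                            lambda_l2=1.0, lambda_cos=1.0):
    """Penalize both the l2 distance and the cosine distance between
    the input gradient at x and at a nearby perturbed point."""
    delta = torch.empty_like(x).uniform_(-eps, eps)  # random point in the eps-ball
    g_clean = input_gradient(model, x, y).flatten(1)
    g_pert = input_gradient(model, x + delta, y).flatten(1)

    l2_term = (g_clean - g_pert).norm(dim=1).mean()
    cos_term = (1.0 - F.cosine_similarity(g_clean, g_pert, dim=1)).mean()
    return lambda_l2 * l2_term + lambda_cos * cos_term
```

In training, this term would simply be added to the usual classification loss, e.g. `loss = F.cross_entropy(model(x), y) + gradient_alignment_loss(model, x, y)`, so that the local gradients stay aligned both in magnitude (l2 term) and in direction (cosine term).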

Published

2023-06-26

How to Cite

Joo, S., Jeong, S., Heo, J., Weller, A., & Moon, T. (2023). Towards More Robust Interpretation via Local Gradient Alignment. Proceedings of the AAAI Conference on Artificial Intelligence, 37(7), 8168-8176. https://doi.org/10.1609/aaai.v37i7.25986

Section

AAAI Technical Track on Machine Learning II