Rethinking Robustness of Model Attributions

Authors

  • Sandesh Kamath Indian Institute of Technology, Hyderabad
  • Sankalp Mittal Indian Institute of Technology, Hyderabad
  • Amit Deshpande Microsoft Research, Bengaluru
  • Vineeth N Balasubramanian Indian Institute of Technology, Hyderabad

DOI:

https://doi.org/10.1609/aaai.v38i3.28047

Keywords:

CV: Interpretability, Explainability, and Transparency

Abstract

For machine learning models to be reliable and trustworthy, their decisions must be interpretable. As these models find increasing use in safety-critical applications, it is important that not just the model predictions but also their explanations (as feature attributions) be robust to small, human-imperceptible input perturbations. Recent works have shown that many attribution methods are fragile and have proposed improvements in either these methods or the model training. We observe two main causes for fragile attributions: first, the existing metrics of robustness (e.g., top-k intersection) overpenalize even reasonable local shifts in attribution, thereby making random perturbations appear to be a strong attack, and second, the attribution can be concentrated in a small region even when there are multiple important parts in an image. To rectify this, we propose simple ways to strengthen existing metrics and attribution methods by incorporating the locality of pixels in robustness metrics and the diversity of pixel locations in attributions. Regarding the role of model training in attributional robustness, we empirically observe that adversarially trained models have more robust attributions on smaller datasets; however, this advantage disappears on larger datasets. Code is made available at https://github.com/ksandeshk/LENS.
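To illustrate the first observation, the sketch below contrasts the standard top-k intersection metric with a locality-tolerant variant: a top-k pixel of the original attribution counts as preserved if some top-k pixel of the perturbed attribution lies within a small neighborhood of it. This is an illustrative sketch only, not the authors' exact metric; the function names and the use of Chebyshev distance with a `radius` parameter are assumptions for the example.

```python
import numpy as np

def topk_indices(attr, k):
    """Return (row, col) coordinates of the k largest attribution values."""
    flat = np.argsort(attr.ravel())[-k:]
    return np.stack(np.unravel_index(flat, attr.shape), axis=1)

def topk_intersection(attr_a, attr_b, k):
    """Standard metric: fraction of top-k pixels shared at exactly the same location."""
    a = {tuple(p) for p in topk_indices(attr_a, k)}
    b = {tuple(p) for p in topk_indices(attr_b, k)}
    return len(a & b) / k

def local_topk_intersection(attr_a, attr_b, k, radius=1):
    """Locality-tolerant variant (illustrative): a top-k pixel of attr_a counts as
    preserved if some top-k pixel of attr_b lies within `radius` in Chebyshev distance."""
    pa = topk_indices(attr_a, k)
    pb = topk_indices(attr_b, k)
    matched = sum(
        1 for p in pa if np.any(np.max(np.abs(pb - p), axis=1) <= radius)
    )
    return matched / k

# A one-pixel shift of every salient location: the standard metric reports
# total disagreement, while the local variant reports full agreement.
a = np.zeros((5, 5)); a[2, 2], a[0, 0], a[4, 4] = 3, 2, 1
b = np.zeros((5, 5)); b[2, 3], b[0, 1], b[3, 4] = 3, 2, 1
print(topk_intersection(a, b, 3))        # → 0.0
print(local_topk_intersection(a, b, 3))  # → 1.0
```

This makes the abstract's point concrete: under the exact-match metric, a tiny spatial shift looks like a complete attribution failure, whereas the locality-aware score recognizes that the explanation has barely moved.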

Published

2024-03-24

How to Cite

Kamath, S., Mittal, S., Deshpande, A., & Balasubramanian, V. N. (2024). Rethinking Robustness of Model Attributions. Proceedings of the AAAI Conference on Artificial Intelligence, 38(3), 2688-2696. https://doi.org/10.1609/aaai.v38i3.28047

Section

AAAI Technical Track on Computer Vision II