Adversarial Attacks on the Interpretation of Neuron Activation Maximization

Authors

  • Geraldin Nanfack, Concordia University; Mila – Quebec AI Institute
  • Alexander Fulleringer, Concordia University; Mila – Quebec AI Institute
  • Jonathan Marty, Princeton University
  • Michael Eickenberg, Flatiron Institute
  • Eugene Belilovsky, Concordia University; Mila – Quebec AI Institute

DOI:

https://doi.org/10.1609/aaai.v38i5.28228

Keywords:

CV: Interpretability, Explainability, and Transparency; ML: Transparent, Interpretable, Explainable ML

Abstract

Feature visualization is one of the most popular techniques used to interpret the internal behavior of individual units of trained deep neural networks. Based on activation maximization, it consists of finding synthetic or natural inputs that maximize neuron activations. This paper introduces an optimization framework that aims to deceive feature visualization through adversarial model manipulation. It consists of fine-tuning a pre-trained model with a specifically introduced loss that aims to maintain model performance while significantly changing feature visualization. We provide evidence of the success of this manipulation on several pre-trained models for ImageNet classification.
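
To make the manipulation objective concrete, below is a minimal PyTorch sketch, not the authors' exact loss: it fine-tunes a pre-trained network with a combined objective that keeps classification accuracy while pushing one unit's activations away from those of the frozen original model, which in turn alters what activation maximization recovers for that unit. The choice of ResNet-18, the `layer4` hook, `target_channel`, and the trade-off weight `lam` are illustrative assumptions.

```python
# Sketch of an adversarial model-manipulation fine-tuning step (assumptions noted above).
import torch
import torch.nn.functional as F
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen reference model and the copy we fine-tune ("manipulate").
original = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).to(device).eval()
manipulated = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).to(device)

target_channel = 7  # hypothetical unit under attack (channel of layer4's output)
feats = {}

def save_output(name):
    def hook(module, inputs, output):
        feats[name] = output
    return hook

original.layer4.register_forward_hook(save_output("orig"))
manipulated.layer4.register_forward_hook(save_output("manip"))

optimizer = torch.optim.SGD(manipulated.parameters(), lr=1e-4, momentum=0.9)
lam = 1.0  # trade-off between preserving accuracy and changing the unit's behavior

def training_step(images, labels):
    """One fine-tuning step on a batch of labeled ImageNet-style images."""
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()

    logits = manipulated(images)          # fills feats["manip"]
    with torch.no_grad():
        original(images)                  # fills feats["orig"]

    task_loss = F.cross_entropy(logits, labels)  # keep model performance
    # Simple stand-in for "significantly changing feature visualization":
    # push the attacked channel's responses away from the original model's.
    divergence = -F.mse_loss(feats["manip"][:, target_channel],
                             feats["orig"][:, target_channel])

    loss = task_loss + lam * divergence
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the divergence term is only a proxy; any differentiable term that decouples the unit's activations (and hence its activation-maximization result) from the original model could be substituted, with `lam` controlling how much accuracy preservation is traded against the change.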

Published

2024-03-24

How to Cite

Nanfack, G., Fulleringer, A., Marty, J., Eickenberg, M., & Belilovsky, E. (2024). Adversarial Attacks on the Interpretation of Neuron Activation Maximization. Proceedings of the AAAI Conference on Artificial Intelligence, 38(5), 4315-4324. https://doi.org/10.1609/aaai.v38i5.28228

Issue

Vol. 38 No. 5 (2024)

Section

AAAI Technical Track on Computer Vision IV