ML-LOO: Detecting Adversarial Examples with Feature Attribution

Authors

  • Puyudi Yang, University of California, Davis
  • Jianbo Chen, University of California, Berkeley
  • Cho-Jui Hsieh, University of California, Los Angeles
  • Jane-Ling Wang, University of California, Davis
  • Michael Jordan, University of California, Berkeley

DOI:

https://doi.org/10.1609/aaai.v34i04.6140

Abstract

Deep neural networks achieve state-of-the-art performance on a wide range of tasks. However, they are easily fooled by adding a small adversarial perturbation to the input; for image data, the perturbation is often imperceptible to humans. We observe a significant difference in feature attributions between adversarially crafted examples and original examples. Based on this observation, we introduce a new framework that detects adversarial examples by thresholding a scale estimate of the feature attribution scores. Furthermore, we extend our method to incorporate multi-layer feature attributions in order to handle attacks with mixed confidence levels. As demonstrated in extensive experiments, our method achieves superior performance in distinguishing adversarial examples generated by popular attack methods on a variety of real data sets, compared to state-of-the-art detection methods. In particular, our method is able to detect adversarial examples of mixed confidence levels and to transfer across different attack methods. We also show that our method achieves competitive performance even when the attacker has complete access to the detector.
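The single-layer detection statistic described in the abstract, a scale estimate of leave-one-out (LOO) feature attribution scores compared against a threshold, can be illustrated with a short sketch. The code below is an assumption-laden illustration rather than the authors' released implementation: `predict_proba` is a hypothetical classifier interface, the zero masking baseline and the interquartile range as the scale estimate are illustrative choices, and the threshold would be calibrated on clean validation data. The multi-layer extension (ML-LOO) aggregates analogous statistics computed from intermediate-layer activations.

```python
# Sketch of LOO attribution followed by a dispersion-based detection statistic.
# `predict_proba` stands in for any classifier mapping a batch of flattened
# inputs to class probabilities; baseline and threshold are illustrative.
import numpy as np

def loo_attribution(predict_proba, x, baseline=0.0):
    """Leave-one-out attribution: the drop in the predicted-class probability
    when each feature is replaced by a baseline value."""
    probs = predict_proba(x[None, :])[0]
    c = int(np.argmax(probs))                       # predicted class
    masked = np.tile(x, (x.size, 1))                # one copy of x per feature
    masked[np.arange(x.size), np.arange(x.size)] = baseline
    masked_probs = predict_proba(masked)[:, c]
    return probs[c] - masked_probs                  # one score per feature

def scale_statistic(attributions):
    """Scale estimate of the attribution map (here: interquartile range)."""
    q75, q25 = np.percentile(attributions, [75, 25])
    return q75 - q25

def is_adversarial(predict_proba, x, threshold):
    """Flag inputs whose attribution map is unusually dispersed;
    the threshold is assumed to be calibrated on clean data."""
    return scale_statistic(loo_attribution(predict_proba, x)) > threshold
```

The intuition, per the abstract, is that adversarially crafted inputs tend to spread importance across features differently than clean inputs, so the dispersion of the attribution map separates the two populations.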

Published

2020-04-03

How to Cite

Yang, P., Chen, J., Hsieh, C.-J., Wang, J.-L., & Jordan, M. (2020). ML-LOO: Detecting Adversarial Examples with Feature Attribution. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 6639-6647. https://doi.org/10.1609/aaai.v34i04.6140

Section

AAAI Technical Track: Machine Learning