Detecting Adversarial Examples from Sensitivity Inconsistency of Spatial-Transform Domain

Authors

  • Jinyu Tian University of Macau
  • Jiantao Zhou University of Macau
  • Yuanman Li Shenzhen University
  • Jia Duan University of Macau

DOI:

https://doi.org/10.1609/aaai.v35i11.17187

Keywords:

Adversarial Learning & Robustness

Abstract

Deep neural networks (DNNs) have been shown to be vulnerable to adversarial examples (AEs), which are maliciously designed to cause dramatic model output errors. In this work, we reveal that normal examples (NEs) are insensitive to fluctuations occurring at highly-curved regions of the decision boundary, whereas AEs, which are typically crafted in a single domain (mostly the spatial domain), exhibit exorbitant sensitivity to such fluctuations. This phenomenon motivates us to design another classifier (called the dual classifier) with a transformed decision boundary, which can be used collaboratively with the original classifier (called the primal classifier) to detect AEs by virtue of this sensitivity inconsistency. Compared with state-of-the-art algorithms based on Local Intrinsic Dimensionality (LID), Mahalanobis Distance (MD), and Feature Squeezing (FS), our proposed Sensitivity Inconsistency Detector (SID) achieves improved AE detection performance and superior generalization capabilities, especially in the challenging cases where the adversarial perturbation levels are small. Extensive experimental results on ResNet and VGG validate the superiority of the proposed SID.
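The detection idea described in the abstract can be illustrated with a short sketch: score how differently a primal (spatial-domain) classifier and a dual (transform-domain) classifier respond to the same input, and flag inputs whose discrepancy is large. This is a minimal, hypothetical illustration, not the authors' released implementation: the names primal_model and dual_model, the softmax-discrepancy score, and the fixed threshold are assumptions standing in for the learned detector described in the paper.

```python
# Hypothetical sketch of sensitivity-inconsistency detection.
# Assumes two trained classifiers: a primal (spatial-domain) model and a
# dual model whose decision boundary is defined over a transformed domain.
import torch
import torch.nn.functional as F


def sensitivity_inconsistency_score(primal_model, dual_model, x):
    """Measure how differently the primal and dual classifiers respond
    to the same batch of inputs x. Larger scores suggest adversarial inputs."""
    with torch.no_grad():
        p_primal = F.softmax(primal_model(x), dim=1)
        p_dual = F.softmax(dual_model(x), dim=1)
    # Simple stand-in inconsistency measure: total variation distance between
    # the two predictive distributions (the paper trains a detector on such
    # primal/dual discrepancies; this scalar score is only illustrative).
    return 0.5 * (p_primal - p_dual).abs().sum(dim=1)


def detect(primal_model, dual_model, x, threshold=0.5):
    """Flag inputs whose inconsistency exceeds a threshold; in practice the
    threshold (or a learned detector) would be chosen on a validation set."""
    return sensitivity_inconsistency_score(primal_model, dual_model, x) > threshold
```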

Published

2021-05-18

How to Cite

Tian, J., Zhou, J., Li, Y., & Duan, J. (2021). Detecting Adversarial Examples from Sensitivity Inconsistency of Spatial-Transform Domain. Proceedings of the AAAI Conference on Artificial Intelligence, 35(11), 9877-9885. https://doi.org/10.1609/aaai.v35i11.17187

Issue

Vol. 35 No. 11 (2021)

Section

AAAI Technical Track on Machine Learning IV