Towards Imperceptible and Robust Adversarial Example Attacks Against Neural Networks

Authors

  • Bo Luo, The Chinese University of Hong Kong
  • Yannan Liu, The Chinese University of Hong Kong
  • Lingxiao Wei, The Chinese University of Hong Kong
  • Qiang Xu, The Chinese University of Hong Kong

Keywords

machine learning, adversarial, security, DNN

Abstract

Machine learning systems based on deep neural networks, which produce state-of-the-art results on various perception tasks, have gained mainstream adoption in many applications. However, they have been shown to be vulnerable to adversarial example attacks, which induce malicious outputs by adding slight perturbations to the input. Previous adversarial example crafting methods, however, use simple metrics to evaluate the distance between the original examples and the adversarial ones, so the resulting perturbations can be easily detected by human eyes. In addition, these attacks are often not robust to the inevitable noise and deviations of the physical world. In this work, we present a new adversarial example crafting method that takes the human perceptual system into consideration and maximizes the noise tolerance of the crafted adversarial examples. Experimental results demonstrate the efficacy of the proposed technique.
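The abstract points at two ingredients: a perceptually weighted distance that tolerates larger perturbations where human eyes are less sensitive, and a noise-tolerance objective that keeps the adversarial label stable under real-world noise. Below is a minimal sketch of both ideas, assuming a grayscale image, local standard deviation as the sensitivity proxy, and the probability gap between the target class and the runner-up class as the tolerance measure; the function names and these specific choices are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def local_sensitivity(image, k=3):
    # Assumed proxy: per-pixel sensitivity as the standard deviation of a
    # k x k neighborhood. Perturbations in high-variance (textured) regions
    # are taken to be harder for a human observer to notice.
    h, w = image.shape  # grayscale image assumed
    pad = k // 2
    padded = np.pad(image, pad, mode="reflect")
    sens = np.empty((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            sens[i, j] = padded[i:i + k, j:j + k].std()
    return sens

def perceptual_distance(perturbation, sensitivity, eps=1e-8):
    # Perceptually weighted distance: each pixel's perturbation is scaled
    # down by the local sensitivity, so the same magnitude "costs" less
    # in busy regions than in smooth ones.
    return np.sum(np.abs(perturbation) / (sensitivity + eps))

def noise_tolerance(logits, target):
    # Gap between the target-class probability and the best other class;
    # a larger gap means the adversarial label survives more input noise
    # before flipping back to a different class.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    others = np.delete(probs, target)
    return probs[target] - others.max()
```

Under these assumptions, an attack would search for a perturbation that maximizes noise_tolerance on the target network while keeping perceptual_distance below a fixed budget, rather than constraining a plain Lp norm.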

Published

2018-04-25

How to Cite

Luo, B., Liu, Y., Wei, L., & Xu, Q. (2018). Towards Imperceptible and Robust Adversarial Example Attacks Against Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/11499