Learning Universal Adversarial Perturbation by Adversarial Example

Authors

  • Maosen Li, Xidian University
  • Yanhua Yang, Xidian University
  • Kun Wei, Xidian University
  • Xu Yang, Xidian University
  • Heng Huang, University of Pittsburgh

DOI:

https://doi.org/10.1609/aaai.v36i2.20023

Keywords:

Computer Vision (CV)

Abstract

Deep learning models have been shown to be susceptible to universal adversarial perturbation (UAP), which has raised widespread concern in the community. Compared with conventional adversarial attacks that generate adversarial samples at the instance level, UAP can fool the target model on different instances with only a single perturbation, enabling us to evaluate the robustness of the model from a more effective and accurate perspective. Existing universal attack methods fail to exploit the differences and connections between the instance and universal levels to produce dominant perturbations. To address this challenge, we propose a new universal attack method that unifies instance-specific and universal attacks from a feature perspective to generate a more dominant UAP. Specifically, we reformulate the UAP generation task as a minimax optimization problem and then utilize an instance-specific attack method to solve the minimization problem, thereby obtaining better training data for generating the UAP. At the same time, we introduce a consistency regularizer to explore the relationship between training data, further improving the dominance of the generated UAP. Furthermore, our method is generic, with no additional assumptions about the training data, and hence can be applied in both data-dependent (supervised) and data-independent (unsupervised) settings. Extensive experiments demonstrate that the proposed method improves performance by a significant margin over existing methods in both the data-dependent and data-independent settings. Code is available at https://github.com/lisenxd/AT-UAP.
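The minimax structure described in the abstract — an inner instance-level attack that hardens each training sample, and an outer loop that updates a single shared perturbation against those hardened samples — can be sketched on a toy problem. The snippet below is a minimal, illustrative reading only, not the authors' AT-UAP implementation: it uses a hypothetical linear scorer, FGSM-style signed-gradient steps, and a targeted variant of the outer objective (pushing every score toward one class), and it omits the paper's consistency regularizer entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear scorer f(x) = w @ x with decision sign(f). Everything here is
# an illustrative stand-in for the paper's setup, not the authors' code.
w = rng.normal(size=16)
X = rng.normal(size=(64, 16))
pred_clean = np.sign(X @ w)

eps_inst, eps_uap, lr, steps = 0.05, 0.5, 0.02, 100
delta = np.zeros(16)  # the single universal perturbation being learned

for _ in range(steps):
    grad = np.zeros(16)
    for x, p in zip(X, pred_clean):
        # Inner, instance-level step (the "minimization"): an FGSM-like move
        # that reinforces the clean prediction, handing the outer loop a
        # harder training point than the raw sample.
        x_hard = x + eps_inst * p * np.sign(w)
        # Outer, universal-level step (the "maximization"): a targeted
        # variant accumulating the gradient that pushes every still-positive
        # hardened score toward the negative class.
        if w @ (x_hard + delta) > 0:
            grad -= w
    # Signed gradient step on delta, then projection onto the l_inf ball.
    delta = np.clip(delta + lr * np.sign(grad), -eps_uap, eps_uap)

# Fooling rate: fraction of inputs whose prediction the one delta flips.
fool_rate = float(np.mean(np.sign((X + delta) @ w) != pred_clean))
```

Because `delta` is trained against the hardened samples rather than the raw ones, it must overcome a larger margin during training, which is one intuition for why instance-level attacks can supply better training data for a universal perturbation.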

Published

2022-06-28

How to Cite

Li, M., Yang, Y., Wei, K., Yang, X., & Huang, H. (2022). Learning Universal Adversarial Perturbation by Adversarial Example. Proceedings of the AAAI Conference on Artificial Intelligence, 36(2), 1350-1358. https://doi.org/10.1609/aaai.v36i2.20023

Section

AAAI Technical Track on Computer Vision II