Attacking Transformers with Feature Diversity Adversarial Perturbation

Authors

  • Chenxing Gao, Huazhong University of Science and Technology
  • Hang Zhou, Huazhong University of Science and Technology
  • Junqing Yu, Huazhong University of Science and Technology
  • Yuteng Ye, Huazhong University of Science and Technology
  • Jiale Cai, Huazhong University of Science and Technology
  • Junle Wang, Tencent
  • Wei Yang, Huazhong University of Science and Technology

DOI:

https://doi.org/10.1609/aaai.v38i3.27947

Keywords:

CV: Adversarial Attacks & Robustness, CV: Object Detection & Categorization

Abstract

Understanding the mechanisms behind Vision Transformers (ViTs), particularly their vulnerability to adversarial perturbations, is crucial for addressing the challenges they face in real-world applications. Existing adversarial attacks on ViTs rely on labels to compute the gradient for the perturbation and exhibit low transferability to other architectures and tasks. In this paper, we present a label-free white-box attack for ViT-based models that transfers strongly to a variety of black-box models, including most ViT variants, CNNs, and MLPs, and even to models developed for other modalities. Our inspiration comes from the feature collapse phenomenon in ViTs: the attention mechanism depends excessively on the low-frequency component of features, so the features in middle-to-end layers become increasingly similar and eventually collapse. We propose a feature diversity attacker that naturally accelerates this process, achieving remarkable attack performance and transferability.
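For readers who want a concrete picture, the sketch below is a minimal PyTorch rendition of the idea under stated assumptions: it runs PGD under an L-infinity budget, but instead of a label-based loss it descends on a feature-diversity loss (the norm of each token feature's residual around the per-image token mean, i.e., everything except the low-frequency/DC component) collected from the middle-to-end blocks of a timm ViT. The model name, attacked layers, loss form, and step sizes are illustrative choices, not the authors' exact formulation.

```python
# A minimal sketch of a label-free feature-diversity attack on a ViT
# (not the authors' implementation). The diversity loss below -- the norm
# of the token features' residual around their per-image mean, i.e., the
# non-DC component -- is one plausible instantiation of the paper's idea.
import torch
import timm

def feature_diversity(tokens: torch.Tensor) -> torch.Tensor:
    # tokens: (B, N, D). The token mean is the low-frequency (DC) part;
    # the residual norm measures how diverse the token features are.
    residual = tokens - tokens.mean(dim=1, keepdim=True)
    return residual.norm(dim=-1).mean()

def diversity_attack(model, x, layers=range(6, 12),
                     eps=8 / 255, alpha=1 / 255, steps=10):
    # Collect intermediate block outputs with forward hooks.
    feats = []
    hooks = [model.blocks[i].register_forward_hook(
        lambda m, inp, out: feats.append(out)) for i in layers]
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        feats.clear()
        model(x + delta)  # input normalization omitted for brevity
        # Descend on diversity in middle-to-end blocks to push the
        # features toward collapse; no labels are involved.
        loss = sum(feature_diversity(f) for f in feats)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)                    # L-inf budget
            delta.copy_((x + delta).clamp(0, 1) - x)   # valid pixel range
        delta.grad = None
    for h in hooks:
        h.remove()
    return (x + delta).detach()

model = timm.create_model("vit_base_patch16_224", pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)
x_adv = diversity_attack(model, torch.rand(1, 3, 224, 224))
```

Because the loss never touches labels or a task head, the same procedure applies unchanged to any input, which is what makes the attack label-free.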

Published

2024-03-24

How to Cite

Gao, C., Zhou, H., Yu, J., Ye, Y., Cai, J., Wang, J., & Yang, W. (2024). Attacking Transformers with Feature Diversity Adversarial Perturbation. Proceedings of the AAAI Conference on Artificial Intelligence, 38(3), 1788-1796. https://doi.org/10.1609/aaai.v38i3.27947

Issue

Vol. 38 No. 3 (2024)

Section

AAAI Technical Track on Computer Vision II