Towards Transferable Adversarial Attacks on Vision Transformers
Keywords: Computer Vision (CV)
Abstract
Vision transformers (ViTs) have demonstrated impressive performance on a series of computer vision tasks, yet they still suffer from adversarial examples. In this paper, we posit that adversarial attacks on transformers should be specially tailored for their architecture, jointly considering both patches and self-attention, in order to achieve high transferability. More specifically, we introduce a dual attack framework, which contains a Pay No Attention (PNA) attack and a PatchOut attack, to improve the transferability of adversarial samples across different ViTs. We show that skipping the gradients of attention during backpropagation can generate adversarial examples with high transferability. In addition, adversarial perturbations generated by optimizing randomly sampled subsets of patches at each iteration achieve higher attack success rates than attacks using all patches. We evaluate the transferability of attacks on state-of-the-art ViTs, CNNs, and robustly trained CNNs. The results of these experiments demonstrate that the proposed dual attack can greatly boost transferability between ViTs and from ViTs to CNNs. In addition, the proposed method can easily be combined with existing transfer methods to boost performance.
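The PatchOut idea described above — updating the perturbation only on a randomly sampled subset of patches at each iteration — can be sketched as follows. This is not the authors' released code; it is a minimal NumPy illustration under simplifying assumptions (square single-channel image, L-infinity budget, sign-gradient step), and the function names `patchout_mask` and `patchout_step` are hypothetical.

```python
import numpy as np

def patchout_mask(num_patches, subset_size, rng):
    """Sample a random subset of patches; return a 0/1 mask over patches."""
    idx = rng.choice(num_patches, size=subset_size, replace=False)
    mask = np.zeros(num_patches)
    mask[idx] = 1.0
    return mask

def patchout_step(delta, grad, mask, patch_size, alpha, eps):
    """One sign-gradient update applied only to the sampled patches,
    then projected back onto the L_inf ball of radius eps."""
    # Expand the per-patch mask to pixel resolution
    # (assumes a square grid of square patches).
    side = int(np.sqrt(mask.size))
    pixel_mask = np.kron(mask.reshape(side, side),
                         np.ones((patch_size, patch_size)))
    delta = delta + alpha * np.sign(grad) * pixel_mask
    return np.clip(delta, -eps, eps)
```

In a full attack loop, `grad` would be the gradient of the surrogate ViT's loss with respect to the input, resampling the patch subset at every iteration; the PNA component would additionally bypass attention gradients during that backward pass.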
How to Cite
Wei, Z., Chen, J., Goldblum, M., Wu, Z., Goldstein, T., & Jiang, Y.-G. (2022). Towards Transferable Adversarial Attacks on Vision Transformers. Proceedings of the AAAI Conference on Artificial Intelligence, 36(3), 2668-2676. https://doi.org/10.1609/aaai.v36i3.20169
AAAI Technical Track on Computer Vision III