Defending Backdoor Attacks on Vision Transformer via Patch Processing

Authors

  • Khoa D. Doan, VinUniversity
  • Yingjie Lao, Clemson University
  • Peng Yang, Meta Corporation
  • Ping Li, LinkedIn Corporation

DOI:

https://doi.org/10.1609/aaai.v37i1.25125

Keywords:

CV: Bias, Fairness & Privacy

Abstract

Vision Transformers (ViTs) have a radically different architecture with significantly less inductive bias than Convolutional Neural Networks. Along with their improvement in performance, the security and robustness of ViTs are also of great importance to study. In contrast to many recent works that investigate the robustness of ViTs against adversarial examples, this paper examines a representative causative attack, i.e., the backdoor attack. We first examine the vulnerability of ViTs against various backdoor attacks and find that ViTs are also quite vulnerable to existing attacks. However, we observe that the clean-data accuracy and the backdoor attack success rate of ViTs respond distinctively to patch transformations applied before the positional encoding. Based on this finding, we propose an effective method for ViTs to defend against both patch-based and blending-based trigger backdoor attacks via patch processing. Performance is evaluated on several benchmark datasets, including CIFAR10, GTSRB, and TinyImageNet, and the results show that the proposed defense is very successful in mitigating backdoor attacks for ViTs. To the best of our knowledge, this paper presents the first defensive strategy that utilizes a unique characteristic of ViTs against backdoor attacks.
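The abstract's key observation is that transforming image patches before the ViT's positional encoding affects clean accuracy and attack success rate differently. As a generic illustration of this kind of patch-level processing, the sketch below zeroes out a random subset of non-overlapping patches in an input image; the function name, patch size, and drop ratio are illustrative assumptions, not the paper's actual defense procedure.

```python
import numpy as np

def drop_random_patches(image, patch_size=16, drop_ratio=0.1, rng=None):
    """Zero out a random subset of non-overlapping patches in an image.

    A minimal sketch of patch processing (hypothetical parameters);
    the paper's concrete transformations are described in its full text.
    image: H x W x C array whose H and W are multiples of patch_size.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w, _ = image.shape
    grid_h, grid_w = h // patch_size, w // patch_size
    n_patches = grid_h * grid_w
    n_drop = int(n_patches * drop_ratio)
    out = image.copy()
    # Pick patch indices without replacement and blank each chosen patch.
    for idx in rng.choice(n_patches, size=n_drop, replace=False):
        row, col = divmod(idx, grid_w)
        out[row * patch_size:(row + 1) * patch_size,
            col * patch_size:(col + 1) * patch_size, :] = 0
    return out
```

Because a localized patch-based trigger can be wiped out by such a transformation while the global image content largely survives, the attack success rate tends to degrade faster than clean accuracy, which is the asymmetry the defense exploits.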

Published

2023-06-26

How to Cite

Doan, K. D., Lao, Y., Yang, P., & Li, P. (2023). Defending Backdoor Attacks on Vision Transformer via Patch Processing. Proceedings of the AAAI Conference on Artificial Intelligence, 37(1), 506-515. https://doi.org/10.1609/aaai.v37i1.25125

Section

AAAI Technical Track on Computer Vision I