Vision Transformers Are Robust Learners

Authors

  • Sayak Paul, Carted
  • Pin-Yu Chen, IBM Research

DOI:

https://doi.org/10.1609/aaai.v36i2.20103

Keywords:

Computer Vision (CV), Machine Learning (ML)

Abstract

Transformers, composed of multiple self-attention layers, hold strong promise as a generic learning primitive applicable to different data modalities, including the recent breakthroughs in computer vision achieving state-of-the-art (SOTA) standard accuracy. What remains largely unexplored is their robustness evaluation and attribution. In this work, we study the robustness of the Vision Transformer (ViT) (Dosovitskiy et al. 2021) against common corruptions and perturbations, distribution shifts, and natural adversarial examples. We use six diverse ImageNet datasets concerning robust classification to conduct a comprehensive performance comparison of ViT (Dosovitskiy et al. 2021) models and SOTA convolutional neural networks (CNNs), Big-Transfer (Kolesnikov et al. 2020). Through a series of six systematically designed experiments, we then present analyses that provide both quantitative and qualitative indications to explain why ViTs are indeed more robust learners. For example, with fewer parameters and similar dataset and pre-training combinations, ViT gives a top-1 accuracy of 28.10% on ImageNet-A, which is 4.3x higher than a comparable variant of BiT. Our analyses on image masking, Fourier spectrum sensitivity, and spread on discrete cosine energy spectrum reveal intriguing properties of ViT that contribute to its improved robustness. Code for reproducing our experiments is available at https://git.io/J3VO0.
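
The comparison described above amounts to measuring top-1 accuracy of pretrained classifiers on ImageNet-style robustness benchmarks. The following is a minimal sketch of such an evaluation, assuming a pretrained ViT from the timm library; the model name, data path, and preprocessing are illustrative assumptions rather than the authors' exact setup, and benchmarks such as ImageNet-A additionally require remapping their 200 classes onto ImageNet-1k label indices (omitted here).

    # Sketch: top-1 accuracy of a pretrained ViT on an ImageNet-style eval folder.
    # Assumes timm and torchvision are installed and that "path/to/eval_set"
    # (hypothetical) contains class subdirectories whose indices match the
    # model's output classes.
    import torch
    import timm
    from torchvision import datasets, transforms

    model = timm.create_model("vit_base_patch16_224", pretrained=True)
    model.eval()

    # Standard ImageNet preprocessing; the paper's exact pipeline may differ.
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    loader = torch.utils.data.DataLoader(
        datasets.ImageFolder("path/to/eval_set", transform=preprocess),
        batch_size=64)

    correct, total = 0, 0
    with torch.no_grad():
        for images, labels in loader:
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()

    print(f"top-1 accuracy: {correct / total:.2%}")

The same loop, pointed at corrupted or distribution-shifted variants of ImageNet, yields the kind of robustness comparison reported in the abstract.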

Published

2022-06-28

How to Cite

Paul, S., & Chen, P.-Y. (2022). Vision Transformers Are Robust Learners. Proceedings of the AAAI Conference on Artificial Intelligence, 36(2), 2071-2081. https://doi.org/10.1609/aaai.v36i2.20103

Issue

Vol. 36 No. 2 (2022)

Section

AAAI Technical Track on Computer Vision II