Scaled ReLU Matters for Training Vision Transformers

Authors

  • Pichao Wang, Alibaba Group
  • Xue Wang, Alibaba Group
  • Hao Luo, Alibaba Group
  • Jingkai Zhou, Alibaba Group
  • Zhipeng Zhou, Alibaba Group
  • Fan Wang, Alibaba Group
  • Hao Li, Alibaba Group
  • Rong Jin, Alibaba Group

DOI:

https://doi.org/10.1609/aaai.v36i3.20150

Keywords:

Computer Vision (CV)

Abstract

Vision transformers (ViTs) have emerged as an alternative design paradigm to convolutional neural networks (CNNs). However, training ViTs is much harder than training CNNs, as it is sensitive to training parameters such as the learning rate, optimizer, and number of warmup epochs. The reasons for this training difficulty are empirically analysed in the paper Early Convolutions Help Transformers See Better, whose authors conjecture that the issue lies with the patchify-stem of ViT models. In this paper, we further investigate this problem and extend the above conclusion: early convolutions alone do not account for stable training; rather, the scaled ReLU operation in the convolutional stem (conv-stem) is what matters. We verify, both theoretically and empirically, that scaled ReLU in the conv-stem not only improves training stability but also increases the diversity of patch tokens, thus boosting peak performance by a large margin while adding only a few parameters and FLOPs. In addition, extensive experiments demonstrate that previous ViTs are far from well trained, further showing that ViTs have great potential to be a better substitute for CNNs.
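
To make the conv-stem design concrete, the sketch below contrasts the original ViT patchify stem with a convolutional stem in which each convolution is followed by batch normalization and a ReLU; here the "scaled ReLU" is read as a ReLU preceded by a learnable normalization (scaling) layer. This is a minimal PyTorch-style illustration, not the authors' code: the channel widths, stem depth, and embedding dimension are assumptions chosen only to reproduce the usual 16x downsampling.

```python
# Illustrative sketch (not the authors' implementation).
import torch
import torch.nn as nn


class PatchifyStem(nn.Module):
    """Original ViT stem: a single stride-16 convolution that cuts the image into 16x16 patches."""

    def __init__(self, embed_dim=384):
        super().__init__()
        self.proj = nn.Conv2d(3, embed_dim, kernel_size=16, stride=16)

    def forward(self, x):                       # x: (B, 3, H, W)
        x = self.proj(x)                        # (B, embed_dim, H/16, W/16)
        return x.flatten(2).transpose(1, 2)     # (B, num_patches, embed_dim)


class ConvStem(nn.Module):
    """Conv-stem: stacked stride-2 3x3 convolutions, each followed by BatchNorm + ReLU ("scaled ReLU")."""

    def __init__(self, embed_dim=384, channels=(48, 96, 192, 384)):
        super().__init__()
        layers, in_ch = [], 3
        for out_ch in channels:                 # four stride-2 convs -> overall stride 16
            layers += [
                nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1, bias=False),
                nn.BatchNorm2d(out_ch),         # learnable scale/shift preceding the ReLU
                nn.ReLU(inplace=True),
            ]
            in_ch = out_ch
        layers.append(nn.Conv2d(in_ch, embed_dim, kernel_size=1))  # project to the token dimension
        self.stem = nn.Sequential(*layers)

    def forward(self, x):                       # x: (B, 3, H, W)
        x = self.stem(x)                        # (B, embed_dim, H/16, W/16)
        return x.flatten(2).transpose(1, 2)     # (B, num_patches, embed_dim)


if __name__ == "__main__":
    tokens = ConvStem()(torch.randn(2, 3, 224, 224))
    print(tokens.shape)                         # torch.Size([2, 196, 384])
```

Either stem can feed the transformer encoder unchanged, since both produce a (batch, num_patches, embed_dim) token sequence; the difference the abstract points to is the normalization-plus-ReLU nonlinearity applied before the patch tokens reach the transformer.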

Published

2022-06-28

How to Cite

Wang, P., Wang, X., Luo, H., Zhou, J., Zhou, Z., Wang, F., Li, H., & Jin, R. (2022). Scaled ReLU Matters for Training Vision Transformers. Proceedings of the AAAI Conference on Artificial Intelligence, 36(3), 2495-2503. https://doi.org/10.1609/aaai.v36i3.20150

Issue

Vol. 36 No. 3 (2022)

Section

AAAI Technical Track on Computer Vision III