MIA-Former: Efficient and Robust Vision Transformers via Multi-Grained Input-Adaptation

Authors

  • Zhongzhi Yu Rice University
  • Yonggan Fu Rice University
  • Sicheng Li Alibaba Group
  • Chaojian Li Rice University
  • Yingyan Lin Rice University

DOI:

https://doi.org/10.1609/aaai.v36i8.20879

Keywords:

Machine Learning (ML)

Abstract

Vision transformers (ViTs) have recently demonstrated great success in various computer vision tasks, motivating tremendously increased interest in deploying them in real-world IoT applications. However, powerful ViTs are often too computationally expensive to fit onto real-world resource-constrained platforms, due to (1) their complexity, which grows quadratically with the number of input tokens, and (2) their overparameterized self-attention heads and model depth. In parallel, different images vary in complexity, and different regions of an image carry different amounts of visual information; a sky background, for example, is not as informative as a foreground object in object classification tasks. Treating all regions with equal model complexity is therefore unnecessary, yet such opportunities for trimming down ViTs' complexity have not been fully exploited. To this end, we propose a Multi-grained Input-Adaptive Vision Transformer framework, dubbed MIA-Former, that input-adaptively adjusts the structure of ViTs at three coarse-to-fine granularities (i.e., model depth and the number of model heads/tokens). In particular, our MIA-Former adopts a low-cost network, trained with a hybrid supervised and reinforcement learning method, to skip unnecessary layers, heads, and tokens in an input-adaptive manner, reducing the overall computational cost. Furthermore, an interesting side effect of our MIA-Former is that its resulting ViTs are naturally equipped with improved robustness against adversarial attacks over their static counterparts: MIA-Former's multi-grained dynamic control increases model diversity, similar to the effect of an ensemble, and thus raises the difficulty of crafting adversarial attacks that succeed against all of its sub-models.
Extensive experiments and ablation studies validate that the proposed MIA-Former framework can (1) effectively allocate computation budgets adaptive to the difficulty of input images, achieving state-of-the-art (SOTA) accuracy-efficiency trade-offs, e.g., up to 16.5% computation savings with the same or even higher accuracy compared with SOTA dynamic transformer models, and (2) boost ViTs' robust accuracy under various adversarial attacks over their vanilla counterparts by 2.4% and 3.0%, respectively. Our code is available at https://github.com/RICE-EIC/MIA-Former.
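The multi-grained skipping described above can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the authors' implementation): a stand-in "controller" derives per-layer, per-head, and per-token keep decisions from cheap input statistics, and the backbone only spends compute on what the masks keep, so harder inputs receive a larger budget. All names, thresholds, and the difficulty proxy are illustrative assumptions; the paper's actual controller is a learned network trained with hybrid supervised and reinforcement learning.

```python
import numpy as np

NUM_LAYERS, NUM_HEADS, NUM_TOKENS, DIM = 4, 4, 16, 32

def controller(x):
    """Toy stand-in for MIA-Former's low-cost gating network (illustrative only)."""
    score = x.mean(axis=-1)                       # (tokens,) crude token saliency
    token_mask = score > np.median(score)         # token level: keep the more salient half
    hardness = x.std()                            # crude proxy for input difficulty
    layer_mask = np.ones(NUM_LAYERS, dtype=bool)
    layer_mask[-1] = hardness > 1.0               # depth level: skip last layer on easy inputs
    head_mask = np.ones((NUM_LAYERS, NUM_HEADS), dtype=bool)
    head_mask[:, NUM_HEADS // 2:] = hardness > 0.5  # head level: fewer heads on easy inputs
    return layer_mask, head_mask, token_mask

def forward(x):
    """Run a mock backbone, executing only the layers/heads/tokens kept by the masks."""
    layer_mask, head_mask, token_mask = controller(x)
    h = x[token_mask]                             # drop uninformative tokens up front
    flops = 0
    for l in range(NUM_LAYERS):
        if not layer_mask[l]:
            continue                              # skip this whole layer
        active_heads = int(head_mask[l].sum())
        flops += active_heads * h.shape[0] ** 2 * DIM  # attention cost of kept heads
        h = h + 0.01 * h                          # placeholder for attention + MLP
    return h, flops

rng = np.random.default_rng(0)
easy = rng.normal(0.0, 0.1, size=(NUM_TOKENS, DIM))   # low-variance "easy" input
hard = rng.normal(0.0, 2.0, size=(NUM_TOKENS, DIM))   # high-variance "hard" input
_, easy_flops = forward(easy)
_, hard_flops = forward(hard)
print(easy_flops < hard_flops)  # → True: the harder input gets a larger budget
```

The design point the sketch captures is that all three granularities compose multiplicatively: skipping tokens shrinks the quadratic attention term, skipping heads shrinks each remaining layer, and skipping layers removes entire blocks, which is why the paper's input-adaptive control can recover sizable savings on easy inputs without touching hard ones.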

Published

2022-06-28

How to Cite

Yu, Z., Fu, Y., Li, S., Li, C., & Lin, Y. (2022). MIA-Former: Efficient and Robust Vision Transformers via Multi-Grained Input-Adaptation. Proceedings of the AAAI Conference on Artificial Intelligence, 36(8), 8962-8970. https://doi.org/10.1609/aaai.v36i8.20879

Section

AAAI Technical Track on Machine Learning III