Peeling the Onion: Hierarchical Reduction of Data Redundancy for Efficient Vision Transformer Training

Authors

  • Zhenglun Kong Northeastern University
  • Haoyu Ma University of California, Irvine
  • Geng Yuan Northeastern University
  • Mengshu Sun Northeastern University
  • Yanyue Xie Northeastern University
  • Peiyan Dong Northeastern University
  • Xin Meng Peking University
  • Xuan Shen Northeastern University
  • Hao Tang ETH Zurich
  • Minghai Qin Western Digital Research
  • Tianlong Chen University of Texas at Austin
  • Xiaolong Ma Clemson University
  • Xiaohui Xie University of California, Irvine
  • Zhangyang Wang University of Texas at Austin
  • Yanzhi Wang Northeastern University

DOI:

https://doi.org/10.1609/aaai.v37i7.26008

Keywords:

ML: Learning on the Edge & Model Compression, CV: Applications, CV: Object Detection & Categorization

Abstract

Vision transformers (ViTs) have recently achieved success in many applications, but their intensive computation and heavy memory usage at both training and inference time limit their generalization. Previous compression algorithms usually start from pre-trained dense models and focus only on efficient inference, so time-consuming training remains unavoidable. In contrast, this paper points out that the million-scale training data are redundant, which is the fundamental reason for the tedious training. To address this issue, we introduce sparsity into the data and propose an end-to-end efficient training framework from three sparse perspectives, dubbed Tri-Level E-ViT. Specifically, we leverage a hierarchical data redundancy reduction scheme by exploring sparsity at three levels: the number of training examples in the dataset, the number of patches (tokens) in each example, and the number of connections between tokens that lie in the attention weights. With extensive experiments, we demonstrate that our proposed technique noticeably accelerates training for various ViT architectures while maintaining accuracy. Remarkably, under certain ratios, we can even improve ViT accuracy rather than compromise it. For example, we achieve a 15.2% speedup with 72.6% (+0.4) Top-1 accuracy on DeiT-T, and a 15.7% speedup with 79.9% (+0.1) Top-1 accuracy on DeiT-S. This proves the existence of data redundancy in ViT. Our code is released at https://github.com/ZLKong/Tri-Level-ViT.
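
To make the three sparsity levels concrete, below is a minimal, self-contained PyTorch sketch of hierarchical data redundancy reduction. It is not the paper's implementation: the function names (select_examples, prune_tokens, sparse_attention) and the selection criteria (random example subsampling, token-norm scoring, per-query top-k attention) are simplifying assumptions chosen only to illustrate where each level of sparsity acts.

    # Illustrative sketch of the three sparsity levels (examples, tokens, attention links).
    # NOT the authors' implementation; selection criteria here are placeholder heuristics.
    import torch
    import torch.nn.functional as F

    def select_examples(dataset_size: int, keep_ratio: float) -> torch.Tensor:
        """Level 1: keep only a subset of training examples (random subset for illustration)."""
        keep = max(1, int(dataset_size * keep_ratio))
        return torch.randperm(dataset_size)[:keep]

    def prune_tokens(tokens: torch.Tensor, keep_ratio: float) -> torch.Tensor:
        """Level 2: keep a fraction of patch tokens per example.
        tokens: (B, N, D); the class token at index 0 is always kept."""
        cls_tok, patches = tokens[:, :1], tokens[:, 1:]
        keep = max(1, int(patches.size(1) * keep_ratio))
        scores = patches.norm(dim=-1)                        # (B, N-1) proxy importance score
        idx = scores.topk(keep, dim=1).indices               # (B, keep)
        idx = idx.unsqueeze(-1).expand(-1, -1, patches.size(-1))
        return torch.cat([cls_tok, patches.gather(1, idx)], dim=1)

    def sparse_attention(q, k, v, keep_ratio: float) -> torch.Tensor:
        """Level 3: keep only the strongest token-to-token connections in the
        attention map, masking the rest before the weighted sum."""
        scale = q.size(-1) ** -0.5
        attn = (q @ k.transpose(-2, -1)) * scale             # (B, H, N, N)
        keep = max(1, int(attn.size(-1) * keep_ratio))
        thresh = attn.topk(keep, dim=-1).values[..., -1:]    # per-query cutoff value
        attn = attn.masked_fill(attn < thresh, float("-inf"))
        return F.softmax(attn, dim=-1) @ v

    if __name__ == "__main__":
        B, H, N, D = 2, 3, 197, 64                           # ViT-like shapes (196 patches + cls)
        sample_ids = select_examples(dataset_size=1_281_167, keep_ratio=0.9)  # ImageNet-1k scale
        tokens = torch.randn(B, N, H * D)
        tokens = prune_tokens(tokens, keep_ratio=0.7)        # fewer tokens per image
        q = k = v = torch.randn(B, H, tokens.size(1), D)
        out = sparse_attention(q, k, v, keep_ratio=0.5)      # sparser attention links
        print(len(sample_ids), tokens.shape, out.shape)

In this sketch, each level can be tuned with its own keep ratio, matching the abstract's observation that certain ratios accelerate training without hurting, and sometimes even improving, accuracy.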

Published

2023-06-26

How to Cite

Kong, Z., Ma, H., Yuan, G., Sun, M., Xie, Y., Dong, P., Meng, X., Shen, X., Tang, H., Qin, M., Chen, T., Ma, X., Xie, X., Wang, Z., & Wang, Y. (2023). Peeling the Onion: Hierarchical Reduction of Data Redundancy for Efficient Vision Transformer Training. Proceedings of the AAAI Conference on Artificial Intelligence, 37(7), 8360-8368. https://doi.org/10.1609/aaai.v37i7.26008

Issue

Vol. 37 No. 7 (2023)

Section

AAAI Technical Track on Machine Learning II