TY  - JOUR
AU  - Yu, Fang
AU  - Huang, Kun
AU  - Wang, Meng
AU  - Cheng, Yuan
AU  - Chu, Wei
AU  - Cui, Li
PY  - 2022/06/28
Y2  - 2024/03/29
TI  - Width & Depth Pruning for Vision Transformers
JF  - Proceedings of the AAAI Conference on Artificial Intelligence
JA  - AAAI
VL  - 36
IS  - 3
SE  - AAAI Technical Track on Computer Vision III
DO  - 10.1609/aaai.v36i3.20222
UR  - https://ojs.aaai.org/index.php/AAAI/article/view/20222
SP  - 3143-3151
AB  - Transformer models have demonstrated their promising potential and achieved excellent performance on a series of computer vision tasks. However, the huge computational cost of vision transformers hinders their deployment and application on edge devices. Recent works have proposed to find and remove the unimportant units of vision transformers. Despite achieving remarkable results, these methods consider only the dimension of network width and ignore network depth, which is another important dimension for pruning vision transformers. Therefore, we propose a Width & Depth Pruning (WDPruning) framework that reduces both the width and depth dimensions simultaneously. Specifically, for width pruning, a set of learnable pruning-related parameters is used to adaptively adjust the width of the transformer. For depth pruning, we introduce several shallow classifiers that use the intermediate features of the transformer blocks, which allows images to be classified by shallow classifiers instead of deeper ones. During inference, all of the blocks after a shallow classifier can be dropped, so they introduce no additional parameters or computation. Experimental results on benchmark datasets demonstrate that the proposed method can significantly reduce the computational cost of mainstream vision transformers such as DeiT and Swin Transformer with only a minor accuracy drop. In particular, on ILSVRC-12, we achieve a FLOPs pruning ratio of over 22% by compressing DeiT-Base, even with an increase of 0.14% in Top-1 accuracy.
ER  - 