EPSD: Early Pruning with Self-Distillation for Efficient Model Compression


  • Dong Chen (Jilin University; Midea Group)
  • Ning Liu (Midea Group)
  • Yichen Zhu (Midea Group)
  • Zhengping Che (Midea Group)
  • Rui Ma (Jilin University)
  • Fachao Zhang (Midea Group)
  • Xiaofeng Mou (Midea Group)
  • Yi Chang (Jilin University)
  • Jian Tang (Midea Group)




ML: Learning on the Edge & Model Compression, CV: Applications, CV: Learning & Optimization for CV, ML: Classification and Regression, ML: Deep Learning Algorithms, ML: Multi-class/Multi-label Learning & Extreme Classification, ML: Optimization


Neural network compression techniques, such as knowledge distillation (KD) and network pruning, have received increasing attention. The recent work `Prune, then Distill' reveals that a pruned, student-friendly teacher network can benefit the performance of KD. However, the conventional teacher-student pipeline, which entails cumbersome pre-training of the teacher and complicated compression steps, makes pruning with KD less efficient. Beyond compressing models, recent compression techniques also emphasize efficiency. Early pruning demands significantly less computational cost than conventional pruning methods, as it does not require a large pre-trained model. Likewise, self-distillation (SD), a special case of KD, is more efficient since it requires neither pre-training nor student-teacher pair selection. This inspires us to combine early pruning with SD for efficient model compression. In this work, we propose Early Pruning with Self-Distillation (EPSD), a framework that identifies and preserves distillable weights in early pruning for a given SD task. EPSD efficiently combines early pruning and SD in a two-step process, maintaining the pruned network's trainability for compression. Rather than naively combining pruning and SD, EPSD keeps more distillable weights before training so that the pruned network favors SD, ensuring better distillation. We demonstrate that EPSD improves the training of pruned networks, supported by visual and quantitative analyses. Our evaluation covers diverse benchmarks (CIFAR-10/100, Tiny-ImageNet, full ImageNet, CUB-200-2011, and Pascal VOC), with EPSD outperforming advanced pruning and SD techniques.
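To make the two-step idea concrete, the sketch below illustrates early pruning driven by a self-distillation signal on a toy single-layer model. This is a conceptual sketch only, not the paper's method: the SNIP-style saliency |w · g|, the temperature T, and the toy layer are all stand-ins for EPSD's actual distillable-weight criterion, which is defined in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy single-layer "network": logits = W @ x (hypothetical example)
W = rng.normal(size=(10, 32))
x = rng.normal(size=32)
z = W @ x

# Self-distillation signal at initialization: the network's own
# temperature-softened outputs serve as the (detached) teacher targets.
T = 4.0
teacher = softmax(z / T)          # soft targets from the same network
grad_z = softmax(z) - teacher     # d/dz of CE(teacher, student)
grad_W = np.outer(grad_z, x)      # chain rule through logits = W @ x

# SNIP-style saliency as a placeholder for identifying "distillable" weights
scores = np.abs(W * grad_W)

def prune_mask(scores, sparsity):
    """Keep the top (1 - sparsity) fraction of weights by saliency."""
    k = int(round(scores.size * (1 - sparsity)))
    mask = np.zeros(scores.size)
    mask[np.argsort(scores.ravel())[::-1][:k]] = 1.0
    return mask.reshape(scores.shape)

mask = prune_mask(scores, sparsity=0.9)  # prune 90% of weights before training
W_pruned = W * mask                      # step 2 would train W_pruned with SD
print(mask.mean())                       # → 0.1 (10% of weights kept)
```

Step two of the pipeline (not shown) would then train the masked network with the self-distillation objective; the point of the sketch is that the pruning decision is made before training, using a distillation-aware score.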




How to Cite

Chen, D., Liu, N., Zhu, Y., Che, Z., Ma, R., Zhang, F., Mou, X., Chang, Y., & Tang, J. (2024). EPSD: Early Pruning with Self-Distillation for Efficient Model Compression. Proceedings of the AAAI Conference on Artificial Intelligence, 38(10), 11258-11266. https://doi.org/10.1609/aaai.v38i10.29004



AAAI Technical Track on Machine Learning I