Cross-Layer Distillation with Semantic Calibration
Keywords: (Deep) Neural Network Algorithms, Learning on the Edge & Model Compression, Transfer/Adaptation/Multi-task/Meta/Automated Learning, Learning & Optimization for CV
Abstract
Recently proposed knowledge distillation approaches based on feature-map transfer validate that intermediate layers of a teacher model can serve as effective targets for training a student model to obtain better generalization ability. Existing studies mainly focus on particular representation forms for knowledge transfer between manually specified pairs of teacher-student intermediate layers. However, the semantics of intermediate layers may differ across networks, and manual layer association can lead to negative regularization caused by semantic mismatch between certain teacher-student layer pairs. To address this problem, we propose Semantic Calibration for Cross-layer Knowledge Distillation (SemCKD), which automatically assigns proper target layers of the teacher model to each student layer with an attention mechanism. With a learned attention distribution, each student layer distills knowledge from multiple teacher layers rather than a single fixed intermediate layer, providing appropriate cross-layer supervision during training. Consistent improvements over state-of-the-art approaches are observed in extensive experiments with various network architectures for teacher and student models, demonstrating the effectiveness and flexibility of the proposed attention-based soft layer association mechanism for cross-layer distillation.
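The abstract describes an attention mechanism that softly associates each student layer with all teacher layers. The sketch below is a minimal illustration of that idea, not the authors' implementation: the pooled-feature projections, scaled dot-product attention, 1x1 channel alignment, and per-layer MSE term are all assumptions chosen to make the example self-contained.

import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossLayerAttention(nn.Module):
    """Illustrative module: for each student layer, learn a softmax
    distribution over teacher layers and compute a weighted
    feature-matching loss (a sketch of attention-based soft layer
    association, not the exact SemCKD formulation)."""

    def __init__(self, student_channels, teacher_channels, embed_dim=128):
        super().__init__()
        # Hypothetical projections: map globally pooled features into a
        # shared embedding space for student-teacher layer similarities.
        self.query_proj = nn.ModuleList(
            [nn.Linear(c, embed_dim) for c in student_channels])
        self.key_proj = nn.ModuleList(
            [nn.Linear(c, embed_dim) for c in teacher_channels])
        # Hypothetical 1x1 convolutions to align channel dimensions
        # before the feature-matching term.
        self.align = nn.ModuleList([
            nn.ModuleList([nn.Conv2d(sc, tc, kernel_size=1)
                           for tc in teacher_channels])
            for sc in student_channels])

    def forward(self, student_feats, teacher_feats):
        loss = 0.0
        for i, fs in enumerate(student_feats):
            # Global-average-pooled descriptors act as query/key vectors.
            q = self.query_proj[i](fs.mean(dim=(2, 3)))              # (B, D)
            keys = torch.stack(
                [self.key_proj[j](ft.mean(dim=(2, 3)))
                 for j, ft in enumerate(teacher_feats)], dim=1)      # (B, T, D)
            attn = F.softmax((keys @ q.unsqueeze(-1)).squeeze(-1)
                             / q.size(-1) ** 0.5, dim=1)             # (B, T)
            for j, ft in enumerate(teacher_feats):
                fs_aligned = self.align[i][j](fs)
                # Resize spatially if student and teacher maps differ.
                if fs_aligned.shape[2:] != ft.shape[2:]:
                    fs_aligned = F.interpolate(
                        fs_aligned, size=ft.shape[2:],
                        mode='bilinear', align_corners=False)
                per_sample = F.mse_loss(
                    fs_aligned, ft.detach(),
                    reduction='none').mean(dim=(1, 2, 3))            # (B,)
                loss = loss + (attn[:, j] * per_sample).mean()
        return loss

In such a setup, the attention parameters would be trained jointly with the student, so the layer association adapts during training instead of being fixed by hand; this loss would typically be added to the usual cross-entropy and logit-distillation terms.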
How to Cite
Chen, D., Mei, J.-P., Zhang, Y., Wang, C., Wang, Z., Feng, Y., & Chen, C. (2021). Cross-Layer Distillation with Semantic Calibration. Proceedings of the AAAI Conference on Artificial Intelligence, 35(8), 7028-7036. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16865
AAAI Technical Track on Machine Learning I