Cross-Layer Distillation with Semantic Calibration

Authors

  • Defang Chen, College of Computer Science, Zhejiang University, China; Zhejiang Provincial Key Laboratory of Service Robot; Zhejiang University-LianlianPay Joint Research Center.
  • Jian-Ping Mei, College of Computer Science, Zhejiang University of Technology, China.
  • Yuan Zhang, College of Computer Science, Zhejiang University, China; Zhejiang University-LianlianPay Joint Research Center.
  • Can Wang, College of Computer Science, Zhejiang University, China; Zhejiang Provincial Key Laboratory of Service Robot; Zhejiang University-LianlianPay Joint Research Center.
  • Zhe Wang, College of Computer Science, Zhejiang University, China; Zhejiang University-LianlianPay Joint Research Center.
  • Yan Feng, College of Computer Science, Zhejiang University, China; Zhejiang University-LianlianPay Joint Research Center.
  • Chun Chen, College of Computer Science, Zhejiang University, China; Zhejiang University-LianlianPay Joint Research Center.

DOI:

https://doi.org/10.1609/aaai.v35i8.16865

Keywords:

(Deep) Neural Network Algorithms, Learning on the Edge & Model Compression, Transfer/Adaptation/Multi-task/Meta/Automated Learning, Learning & Optimization for CV

Abstract

Recently proposed knowledge distillation approaches based on feature-map transfer validate that intermediate layers of a teacher model can serve as effective targets for training a student model to obtain better generalization ability. Existing studies mainly focus on particular representation forms for knowledge transfer between manually specified pairs of teacher-student intermediate layers. However, the semantics of intermediate layers may vary across networks, and manual association of layers can lead to negative regularization caused by semantic mismatch between certain teacher-student layer pairs. To address this problem, we propose Semantic Calibration for Cross-layer Knowledge Distillation (SemCKD), which automatically assigns proper target layers of the teacher model to each student layer with an attention mechanism. With a learned attention distribution, each student layer distills knowledge contained in multiple teacher layers, rather than in a single fixed intermediate layer, providing appropriate cross-layer supervision during training. Consistent improvements over state-of-the-art approaches are observed in extensive experiments with various network architectures for teacher and student models, demonstrating the effectiveness and flexibility of the proposed attention-based soft layer association mechanism for cross-layer distillation.
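To make the abstract's mechanism concrete, the sketch below illustrates one way an attention-based soft layer association could be realized: each student feature map attends over all candidate teacher feature maps, and the cross-layer feature-matching losses are combined with the learned attention weights. This is a minimal illustration, not the authors' released implementation; the module names, the pooled query/key embeddings, the 1x1-convolution projections, and the per-sample MSE matching loss are assumptions made for the example.

```python
# Hedged sketch of attention-weighted cross-layer feature distillation.
# NOT the official SemCKD code; design details here are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossLayerAttentionKD(nn.Module):
    def __init__(self, student_channels, teacher_channels, embed_dim=128):
        # student_channels / teacher_channels: channel counts of the chosen
        # intermediate feature maps of the student / teacher (assumed setup).
        super().__init__()
        # Query/key embeddings computed from globally pooled features.
        self.query = nn.ModuleList(nn.Linear(c, embed_dim) for c in student_channels)
        self.key = nn.ModuleList(nn.Linear(c, embed_dim) for c in teacher_channels)
        # 1x1 convolutions to align each student layer with each teacher layer's channels.
        self.proj = nn.ModuleList(
            nn.ModuleList(nn.Conv2d(cs, ct, kernel_size=1) for ct in teacher_channels)
            for cs in student_channels
        )

    def forward(self, student_feats, teacher_feats):
        # Both inputs are lists of feature maps with shape (B, C, H, W).
        # Returns the attention-weighted cross-layer feature-matching loss.
        loss = 0.0
        # Pooled teacher descriptors -> keys with shape (B, T, D).
        keys = torch.stack(
            [k(F.adaptive_avg_pool2d(f, 1).flatten(1))
             for k, f in zip(self.key, teacher_feats)], dim=1)
        for i, sf in enumerate(student_feats):
            q = self.query[i](F.adaptive_avg_pool2d(sf, 1).flatten(1))          # (B, D)
            attn = F.softmax(keys @ q.unsqueeze(-1) / keys.size(-1) ** 0.5, dim=1)  # (B, T, 1)
            for j, tf in enumerate(teacher_feats):
                aligned = self.proj[i][j](sf)
                # Resize spatially if the two layers disagree on resolution.
                if aligned.shape[-2:] != tf.shape[-2:]:
                    aligned = F.interpolate(aligned, size=tf.shape[-2:],
                                            mode="bilinear", align_corners=False)
                per_sample = F.mse_loss(aligned, tf.detach(),
                                        reduction="none").mean(dim=(1, 2, 3))   # (B,)
                loss = loss + (attn[:, j, 0] * per_sample).mean()
        return loss
```

In a full training loop, a loss of this form would typically be added to the standard cross-entropy and logit-distillation terms with a weighting coefficient; the specific weighting is left out of this sketch.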

Published

2021-05-18

How to Cite

Chen, D., Mei, J.-P., Zhang, Y., Wang, C., Wang, Z., Feng, Y., & Chen, C. (2021). Cross-Layer Distillation with Semantic Calibration. Proceedings of the AAAI Conference on Artificial Intelligence, 35(8), 7028-7036. https://doi.org/10.1609/aaai.v35i8.16865

Section

AAAI Technical Track on Machine Learning I