TY - JOUR
AU - Passban, Peyman
AU - Wu, Yimeng
AU - Rezagholizadeh, Mehdi
AU - Liu, Qun
PY - 2021/05/18
Y2 - 2024/03/29
TI - ALP-KD: Attention-Based Layer Projection for Knowledge Distillation
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 35
IS - 15
SE - AAAI Technical Track on Speech and Natural Language Processing II
DO - 10.1609/aaai.v35i15.17610
UR - https://ojs.aaai.org/index.php/AAAI/article/view/17610
SP - 13657-13665
AB - Knowledge distillation is considered a training and compression strategy in which two neural networks, namely a teacher and a student, are coupled together during training. The teacher network is supposed to be a trustworthy predictor and the student tries to mimic its predictions. Usually, a student with a lighter architecture is selected so we can achieve compression and yet deliver high-quality results. In such a setting, distillation only happens for final predictions, whereas the student could also benefit from the teacher’s supervision for internal components. Motivated by this, we studied the problem of distillation for intermediate layers. Since there might not be a one-to-one alignment between student and teacher layers, existing techniques skip some teacher layers and only distill from a subset of them. This shortcoming directly impacts quality, so we instead propose a combinatorial technique which relies on attention. Our model fuses teacher-side information and takes each layer’s significance into consideration, then it performs distillation between combined teacher layers and those of the student. Using our technique, we distilled a 12-layer BERT (Devlin et al. 2019) into 6-, 4-, and 2-layer counterparts and evaluated them on GLUE tasks (Wang et al. 2018). Experimental results show that our combinatorial approach is able to outperform other existing techniques.
ER -
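
The abstract describes distilling from an attention-weighted combination of all teacher layers rather than a hand-picked subset. Below is a minimal illustrative sketch of that idea, not the authors' released code: the function name, the dot-product scoring of student against teacher hidden states, and the MSE objective are assumptions made for the example.

```python
# Hypothetical sketch of attention-based teacher-layer fusion for
# intermediate-layer distillation (ALP-KD-style); shapes and scoring
# are assumptions, not the paper's exact formulation.
import torch
import torch.nn.functional as F

def attention_layer_fusion_loss(student_h, teacher_hs):
    """student_h: (batch, seq, dim) hidden state of one student layer.
    teacher_hs: list of (batch, seq, dim) hidden states, one per teacher layer."""
    # Stack teacher layers: (batch, seq, num_teacher_layers, dim).
    T = torch.stack(teacher_hs, dim=2)
    # Score every teacher layer against the student layer (dot product),
    # then softmax so all teacher layers contribute according to significance.
    scores = torch.einsum("bsd,bsld->bsl", student_h, T)
    weights = F.softmax(scores, dim=-1)
    # Fuse teacher layers into a single target representation.
    fused = torch.einsum("bsl,bsld->bsd", weights, T)
    # Distill: pull the student layer toward the fused teacher representation.
    return F.mse_loss(student_h, fused)

# Example: 12 teacher layers distilled toward one layer of a smaller student.
batch, seq, dim = 2, 8, 768
teacher_hs = [torch.randn(batch, seq, dim) for _ in range(12)]
student_h = torch.randn(batch, seq, dim)
loss = attention_layer_fusion_loss(student_h, teacher_hs)
```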