Show, Attend and Distill: Knowledge Distillation via Attention-based Feature Matching

Authors

  • Mingi Ji, KAIST
  • Byeongho Heo, NAVER AI LAB
  • Sungrae Park, Upstage AI Research, Upstage AI

DOI:

https://doi.org/10.1609/aaai.v35i9.16969

Keywords:

(Deep) Neural Network Algorithms

Abstract

Knowledge distillation extracts general knowledge from a pretrained teacher network and provides guidance to a target student network. Most studies manually tie intermediate features of the teacher and student, and transfer knowledge through predefined links. However, manual selection often constructs ineffective links that limit the improvement from the distillation. A previous attempt has been made to address this problem, but identifying effective links under practical scenarios remains challenging. In this paper, we introduce an effective and efficient feature distillation method that utilizes all the feature levels of the teacher without manually selecting the links. Specifically, our method uses an attention-based meta-network that learns relative similarities between features, and applies the identified similarities to control the distillation intensities of all possible pairs. As a result, our method determines competent links more efficiently than the previous approach and provides better performance on model compression and transfer learning tasks. Further qualitative analyses and ablative studies describe how our method contributes to better distillation.
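
The mechanism outlined in the abstract (an attention module that scores every student-teacher feature pair and uses the scores to weight per-pair matching losses) can be illustrated with a minimal PyTorch-style sketch. The class name AttentionFeatureMatch, the query/key projections, the embedding size, and the 1x1 alignment convolutions below are illustrative assumptions, not the paper's exact architecture; refer to the paper for the actual formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionFeatureMatch(nn.Module):
    """Sketch of attention-weighted feature distillation (illustrative only).

    Globally pooled student features act as queries and pooled teacher
    features act as keys; a softmax over teacher levels yields weights
    that scale per-pair L2 matching losses, so every teacher level can
    contribute to every student level without hand-picked links.
    """

    def __init__(self, student_dims, teacher_dims, embed_dim=128):
        super().__init__()
        # Per-level projections of pooled features into a shared embedding space.
        self.query = nn.ModuleList([nn.Linear(d, embed_dim) for d in student_dims])
        self.key = nn.ModuleList([nn.Linear(d, embed_dim) for d in teacher_dims])
        # 1x1 convolutions to align student channels with each teacher level
        # before computing the matching loss.
        self.align = nn.ModuleList([
            nn.ModuleList([nn.Conv2d(s, t, kernel_size=1) for t in teacher_dims])
            for s in student_dims
        ])

    def forward(self, student_feats, teacher_feats):
        total = 0.0
        for i, fs in enumerate(student_feats):
            q = self.query[i](F.adaptive_avg_pool2d(fs, 1).flatten(1))        # (B, E)
            ks = torch.stack(
                [self.key[j](F.adaptive_avg_pool2d(ft, 1).flatten(1))
                 for j, ft in enumerate(teacher_feats)], dim=1)                # (B, T, E)
            # Attention over teacher levels: how strongly each teacher feature
            # should guide this student feature.
            alpha = torch.softmax(
                (ks @ q.unsqueeze(-1)).squeeze(-1) / q.size(-1) ** 0.5, dim=1)  # (B, T)
            for j, ft in enumerate(teacher_feats):
                fs_aligned = self.align[i][j](fs)
                if fs_aligned.shape[-2:] != ft.shape[-2:]:
                    fs_aligned = F.interpolate(fs_aligned, size=ft.shape[-2:])
                # Per-sample L2 matching loss, weighted by the attention score.
                pair_loss = F.mse_loss(fs_aligned, ft.detach(), reduction="none")
                pair_loss = pair_loss.mean(dim=(1, 2, 3))                       # (B,)
                total = total + (alpha[:, j] * pair_loss).mean()
        return total
```

In training, such a term would simply be added to the student's task loss; the softmax weights let every teacher level participate while down-weighting pairs that the attention deems unhelpful, which is the intuition behind replacing manually selected links.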

Published

2021-05-18

How to Cite

Ji, M., Heo, B., & Park, S. (2021). Show, Attend and Distill: Knowledge Distillation via Attention-based Feature Matching. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 7945-7952. https://doi.org/10.1609/aaai.v35i9.16969

Issue

Vol. 35 No. 9 (2021)

Section

AAAI Technical Track on Machine Learning II