Understanding the Role of the Projector in Knowledge Distillation

Authors

  • Roy Miles, Imperial College London
  • Krystian Mikolajczyk, Imperial College London

DOI:

https://doi.org/10.1609/aaai.v38i5.28219

Keywords:

CV: Learning & Optimization for CV, CV: Applications, CV: Object Detection & Categorization, CV: Other Foundations of Computer Vision, CV: Representation Learning for Vision, ML: Deep Learning Algorithms, ML: Deep Learning Theory

Abstract

In this paper we revisit the efficacy of knowledge distillation as a function matching and metric learning problem. In doing so we verify three important design decisions, namely the normalisation, soft maximum function, and projection layers, as key ingredients. We theoretically show that the projector implicitly encodes information on past examples, enabling relational gradients for the student. We then show that the normalisation of representations is tightly coupled with the training dynamics of this projector, which can have a large impact on the student's performance. Finally, we show that a simple soft maximum function can be used to address any significant capacity gap problems. Experimental results on various benchmark datasets demonstrate that using these insights can lead to superior or comparable performance to state-of-the-art knowledge distillation techniques, despite being much more computationally efficient. In particular, we obtain these results across image classification (CIFAR100 and ImageNet), object detection (COCO2017), and on more difficult distillation objectives, such as training data-efficient transformers, whereby we attain a 77.2% top-1 accuracy with DeiT-Ti on ImageNet. Code and models are publicly available.
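The three ingredients the abstract names (a trainable projector on the student features, normalisation of both representations, and a soft maximum function over the matching errors) can be sketched as follows. This is a minimal numpy illustration under stated assumptions, not the paper's exact formulation: the feature dimensions, the per-sample standardisation, and the log-sum-exp soft maximum are all placeholders chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): a batch of 8 samples,
# student features of width 64, teacher features of width 128.
B, Ds, Dt = 8, 64, 128
z_s = rng.normal(size=(B, Ds))   # student backbone features
z_t = rng.normal(size=(B, Dt))   # teacher backbone features (frozen)

# 1. A trainable linear projector maps student features into teacher space.
#    (In training, W would be updated by gradient descent alongside the student.)
W = rng.normal(scale=Ds ** -0.5, size=(Ds, Dt))
p_s = z_s @ W

# 2. Normalise both representations. Per-sample standardisation is used here
#    as a simple stand-in for the normalisation discussed in the paper.
def normalise(z):
    return (z - z.mean(axis=1, keepdims=True)) / (z.std(axis=1, keepdims=True) + 1e-6)

p_s, z_t = normalise(p_s), normalise(z_t)

# 3. A soft maximum over per-dimension errors via log-sum-exp, which smoothly
#    emphasises the largest mismatches instead of averaging all of them equally.
def soft_max_loss(a, b, tau=1.0):
    err = np.abs(a - b) / tau
    return tau * (np.log(np.exp(err).sum(axis=1)) - np.log(a.shape[1])).mean()

loss = soft_max_loss(p_s, z_t)
print(float(loss))
```

When the student's projected features exactly match the teacher's, the per-dimension errors are zero and the loss is exactly zero, so the objective is minimised at perfect matching; raising `tau` softens the maximum towards a mean over dimensions.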

Published

2024-03-24

How to Cite

Miles, R., & Mikolajczyk, K. (2024). Understanding the Role of the Projector in Knowledge Distillation. Proceedings of the AAAI Conference on Artificial Intelligence, 38(5), 4233-4241. https://doi.org/10.1609/aaai.v38i5.28219

Section

AAAI Technical Track on Computer Vision IV