Curriculum Temperature for Knowledge Distillation
DOI:
https://doi.org/10.1609/aaai.v37i2.25236
Keywords:
CV: Learning & Optimization for CV, ML: Learning on the Edge & Model Compression
Abstract
Most existing distillation methods ignore the flexible role of the temperature in the loss function and fix it as a hyper-parameter, typically set by an inefficient grid search. In general, the temperature controls the discrepancy between the two distributions and can faithfully determine the difficulty level of the distillation task. Keeping a constant temperature, i.e., a fixed level of task difficulty, is usually sub-optimal for a growing student across its progressive learning stages. In this paper, we propose a simple curriculum-based technique, termed Curriculum Temperature for Knowledge Distillation (CTKD), which controls the task difficulty level throughout the student's training via a dynamic and learnable temperature. Specifically, following an easy-to-hard curriculum, we gradually increase the distillation loss with respect to the temperature, raising the distillation difficulty in an adversarial manner. As an easy-to-use plug-in technique, CTKD can be seamlessly integrated into existing knowledge distillation frameworks and brings general improvements at negligible additional computation cost. Extensive experiments on CIFAR-100, ImageNet-2012, and MS-COCO demonstrate the effectiveness of our method.
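To make the adversarial, curriculum-scheduled temperature concrete, below is a minimal PyTorch sketch of the idea described in the abstract: a learnable global temperature whose gradient is reversed so that it is updated to maximize the distillation loss, with a curriculum factor lambda that grows over training to raise the task difficulty. The class and function names (GradReverse, GlobalTemperature, cosine_curriculum, ctkd_loss) and the cosine schedule are our illustrative assumptions, not the authors' released implementation.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda in the
    backward pass, so the temperature is pushed to *maximize* the distillation
    loss while the student minimizes it (adversarial learning of the temperature)."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class GlobalTemperature(nn.Module):
    """A single learnable temperature shared across the batch (a sketch of a
    global-temperature variant)."""
    def __init__(self, init_t=1.0):
        super().__init__()
        self.t = nn.Parameter(torch.tensor(float(init_t)))

    def forward(self, lambd):
        # Reverse the gradient w.r.t. the temperature; clamp to keep it positive.
        return GradReverse.apply(self.t, lambd).clamp(min=1e-4)


def cosine_curriculum(epoch, total_epochs, lambd_max=1.0):
    """Easy-to-hard curriculum: lambda grows from 0 to lambd_max, gradually
    strengthening the adversarial update and hence the task difficulty."""
    progress = min(epoch / total_epochs, 1.0)
    return lambd_max * 0.5 * (1.0 - math.cos(math.pi * progress))


def ctkd_loss(student_logits, teacher_logits, temp_module, lambd):
    """Standard KD loss computed with the gradient-reversed temperature."""
    t = temp_module(lambd)
    log_p_s = F.log_softmax(student_logits / t, dim=1)
    p_t = F.softmax(teacher_logits / t, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * (t * t)
```

In use, the temperature module's parameter would be added to the same optimizer as the student, and lambda would be recomputed each epoch via the curriculum schedule; because of the gradient reversal, a single backward pass simultaneously trains the student to reduce the distillation loss and the temperature to increase it.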
Published
2023-06-26
How to Cite
Li, Z., Li, X., Yang, L., Zhao, B., Song, R., Luo, L., Li, J., & Yang, J. (2023). Curriculum Temperature for Knowledge Distillation. Proceedings of the AAAI Conference on Artificial Intelligence, 37(2), 1504-1512. https://doi.org/10.1609/aaai.v37i2.25236
Issue
Section
AAAI Technical Track on Computer Vision II