TY  - JOUR
AU  - Li, Gang
AU  - Li, Xiang
AU  - Wang, Yujie
AU  - Zhang, Shanshan
AU  - Wu, Yichao
AU  - Liang, Ding
PY  - 2022/06/28
Y2  - 2024/03/28
TI  - Knowledge Distillation for Object Detection via Rank Mimicking and Prediction-Guided Feature Imitation
JF  - Proceedings of the AAAI Conference on Artificial Intelligence
JA  - AAAI
VL  - 36
IS  - 2
SE  - AAAI Technical Track on Computer Vision II
DO  - 10.1609/aaai.v36i2.20018
UR  - https://ojs.aaai.org/index.php/AAAI/article/view/20018
SP  - 1306
EP  - 1313
AB  - Knowledge Distillation (KD) is a widely used technique for transferring knowledge from cumbersome teacher models to compact student models, thereby realizing model compression and acceleration. Compared with image classification, object detection is a more complex task, and designing specific KD methods for object detection is non-trivial. In this work, we carefully study the behaviour difference between teacher and student detection models and obtain two intriguing observations: First, the teacher and student rank their detected candidate boxes quite differently, which results in a discrepancy in their precision. Second, there is a considerable gap between the feature response differences and the prediction differences between teacher and student, indicating that imitating all of the teacher's feature maps equally is a sub-optimal choice for improving the student's accuracy. Based on these two observations, we propose Rank Mimicking (RM) and Prediction-guided Feature Imitation (PFI), respectively, for distilling one-stage detectors. RM takes the rank of candidate boxes from the teacher as a new form of knowledge to distill, which consistently outperforms traditional soft-label distillation. PFI attempts to correlate feature differences with prediction differences, so that feature imitation directly helps to improve the student's accuracy. On the MS COCO and PASCAL VOC benchmarks, extensive experiments are conducted on various detectors with different backbones to validate the effectiveness of our method. Specifically, RetinaNet with ResNet50 achieves 40.4% mAP on MS COCO, which is 3.5% higher than its baseline and also outperforms previous KD methods.
ER  - 