TY - JOUR
AU - Wang, Zi
PY - 2021/05/18
Y2 - 2024/03/28
TI - Data-Free Knowledge Distillation with Soft Targeted Transfer Set Synthesis
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 35
IS - 11
SE - AAAI Technical Track on Machine Learning IV
DO - 10.1609/aaai.v35i11.17228
UR - https://ojs.aaai.org/index.php/AAAI/article/view/17228
SP - 10245-10253
AB - Knowledge distillation (KD) has proved to be an effective approach for deep neural network compression, which learns a compact network (student) by transferring the knowledge from a pre-trained, over-parameterized network (teacher). In traditional KD, the transferred knowledge is usually obtained by feeding training samples to the teacher network to obtain the class probabilities. However, the original training dataset is not always available due to storage costs or privacy issues. In this study, we propose a novel data-free KD approach by modeling the intermediate feature space of the teacher with a multivariate normal distribution and leveraging the soft targeted labels generated by the distribution to synthesize pseudo samples as the transfer set. Several student networks trained with these synthesized transfer sets present competitive performance compared to the networks trained with the original training set and other data-free KD approaches.
ER -