Data-Free Knowledge Distillation with Soft Targeted Transfer Set Synthesis

Authors

  • Zi Wang, The University of Tennessee, Knoxville, TN

DOI:

https://doi.org/10.1609/aaai.v35i11.17228

Keywords:

(Deep) Neural Network Algorithms, Learning on the Edge & Model Compression, Classification and Regression

Abstract

Knowledge distillation (KD) has proved to be an effective approach to deep neural network compression, in which a compact network (student) is trained by transferring knowledge from a pre-trained, over-parameterized network (teacher). In traditional KD, the transferred knowledge is usually obtained by feeding training samples to the teacher network and recording the resulting class probabilities. However, the original training dataset is not always available, due to storage costs or privacy concerns. In this study, we propose a novel data-free KD approach that models the teacher's intermediate feature space with a multivariate normal distribution and leverages the soft targeted labels generated from this distribution to synthesize pseudo samples as the transfer set. Several student networks trained with these synthesized transfer sets achieve performance competitive with networks trained on the original training set and with other data-free KD approaches.
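The abstract describes the pipeline only at a high level. The sketch below (PyTorch) shows one plausible reading of it, not the paper's actual implementation: names such as teacher.classifier, feature_mean, feature_cov, synthesize_transfer_set, and distill_step are illustrative assumptions. The idea it illustrates is that a multivariate normal over the teacher's intermediate features supplies soft targeted labels, noise inputs are optimized until the teacher reproduces those labels, and the resulting pseudo samples serve as the transfer set for standard distillation.

```python
# Hypothetical sketch of soft-targeted transfer set synthesis for data-free KD.
# How the feature mean/covariance are estimated without data is method-specific
# and not shown here; all module/attribute names are assumptions.
import torch
import torch.nn.functional as F


def synthesize_transfer_set(teacher, feature_mean, feature_cov,
                            num_samples=256, input_shape=(3, 32, 32),
                            steps=500, lr=0.05, temperature=4.0):
    """Sample soft targets from a multivariate normal model of the teacher's
    intermediate feature space, then optimize noise inputs so the teacher
    reproduces those targets. Returns (pseudo_inputs, soft_targets)."""
    teacher.eval()

    # 1. Model the intermediate (penultimate-layer) feature space.
    mvn = torch.distributions.MultivariateNormal(feature_mean, feature_cov)
    sampled_features = mvn.sample((num_samples,))          # (N, feature_dim)

    # 2. Convert sampled features into soft targeted labels via the teacher's
    #    classifier head (assumed to be exposed as `teacher.classifier`).
    with torch.no_grad():
        soft_targets = F.softmax(
            teacher.classifier(sampled_features) / temperature, dim=1)

    # 3. Optimize random inputs so the teacher's softened predictions match
    #    the sampled soft targets; these inputs become the pseudo transfer set.
    pseudo_inputs = torch.randn(num_samples, *input_shape, requires_grad=True)
    optimizer = torch.optim.Adam([pseudo_inputs], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = teacher(pseudo_inputs)
        loss = F.kl_div(F.log_softmax(logits / temperature, dim=1),
                        soft_targets, reduction="batchmean")
        loss.backward()
        optimizer.step()
    return pseudo_inputs.detach(), soft_targets


def distill_step(student, teacher, pseudo_inputs, optimizer, temperature=4.0):
    """One standard KD step on the synthesized transfer set (no ground-truth labels)."""
    teacher.eval()
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(pseudo_inputs) / temperature, dim=1)
    student_log_probs = F.log_softmax(student(pseudo_inputs) / temperature, dim=1)
    loss = F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this reading, the synthesized inputs are generated once (or periodically) and then reused for ordinary teacher-to-student distillation with a temperature-scaled KL loss; the original training data is never touched.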

Published

2021-05-18

How to Cite

Wang, Z. (2021). Data-Free Knowledge Distillation with Soft Targeted Transfer Set Synthesis. Proceedings of the AAAI Conference on Artificial Intelligence, 35(11), 10245-10253. https://doi.org/10.1609/aaai.v35i11.17228

Issue

Vol. 35 No. 11 (2021)

Section

AAAI Technical Track on Machine Learning IV