Robust and Resource-Efficient Data-Free Knowledge Distillation by Generative Pseudo Replay

Authors

  • Kuluhan Binici (National University of Singapore; A*STAR Institute for Infocomm Research)
  • Shivam Aggarwal (National University of Singapore)
  • Nam Trung Pham (A*STAR Institute for Infocomm Research)
  • Karianto Leman (A*STAR Institute for Infocomm Research)
  • Tulika Mitra (National University of Singapore)

DOI:

https://doi.org/10.1609/aaai.v36i6.20556

Keywords:

Machine Learning (ML), Computer Vision (CV)

Abstract

Data-Free Knowledge Distillation (KD) allows knowledge transfer from a trained neural network (teacher) to a more compact one (student) in the absence of original training data. Existing works use a validation set to monitor the accuracy of the student over real data and report the highest performance throughout the entire process. However, validation data may not be available at distillation time either, making it infeasible to record the student snapshot that achieved the peak accuracy. Therefore, a practical data-free KD method should be robust and ideally provide monotonically increasing student accuracy during distillation. This is challenging because the student experiences knowledge degradation due to the distribution shift of the synthetic data. A straightforward approach to overcome this issue is to store and rehearse the generated samples periodically, which increases the memory footprint and creates privacy concerns. We propose to model the distribution of the previously observed synthetic samples with a generative network. In particular, we design a Variational Autoencoder (VAE) with a training objective that is customized to learn the synthetic data representations optimally. The student is rehearsed by the generative pseudo replay technique, with samples produced by the VAE. Hence knowledge degradation can be prevented without storing any samples. Experiments on image classification benchmarks show that our method optimizes the expected value of the distilled model accuracy while eliminating the large memory overhead incurred by the sample-storing methods.
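The sketch below illustrates the generative pseudo replay idea described in the abstract: a VAE is fitted to the synthetic samples the student has already seen, and at each distillation step the student is trained on a mix of freshly generated samples and VAE-sampled "replay" images, so no samples need to be stored. It is a minimal, hedged illustration only; the architectures, hyperparameters, and the plain ELBO objective used here are illustrative assumptions, not the authors' customized VAE objective or exact training configuration. Synthetic images are assumed to be scaled to [0, 1].

```python
# Minimal sketch of generative pseudo replay for data-free KD.
# Assumptions: PyTorch, 32x32x3 inputs in [0, 1], illustrative architectures
# and loss weights; the paper's customized VAE objective is replaced here by
# the standard ELBO for simplicity.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Small fully connected VAE over flattened images."""
    def __init__(self, dim=3 * 32 * 32, latent=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 400), nn.ReLU())
        self.mu = nn.Linear(400, latent)
        self.logvar = nn.Linear(400, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 400), nn.ReLU(),
                                 nn.Linear(400, dim), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x.flatten(1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar

    def sample(self, n, device):
        # Draw pseudo-replay samples from the learned prior.
        z = torch.randn(n, self.mu.out_features, device=device)
        return self.dec(z)

def vae_loss(recon, x, mu, logvar):
    # Standard ELBO (reconstruction + KL); placeholder for the paper's
    # customized objective.
    rec = F.binary_cross_entropy(recon, x.flatten(1), reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

def distill_step(student, teacher, vae, synth_batch, opt_s, opt_v, T=4.0):
    """One step: distill on fresh synthetic samples plus VAE replay samples."""
    replay = vae.sample(synth_batch.size(0), synth_batch.device)
    x = torch.cat([synth_batch, replay.detach().view_as(synth_batch)], dim=0)

    # Knowledge distillation loss on both fresh and replayed samples.
    with torch.no_grad():
        t_logits = teacher(x)
    s_logits = student(x)
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                  F.softmax(t_logits / T, dim=1),
                  reduction="batchmean") * T * T
    opt_s.zero_grad()
    kd.backward()
    opt_s.step()

    # Keep the VAE modelling the distribution of samples seen so far,
    # using only the current synthetic batch (nothing is stored).
    recon, mu, logvar = vae(synth_batch)
    lv = vae_loss(recon, synth_batch, mu, logvar)
    opt_v.zero_grad()
    lv.backward()
    opt_v.step()
    return kd.item(), lv.item()
```

Detaching the replay samples keeps the distillation loss from back-propagating into the VAE, so the replay generator is updated only by its own reconstruction objective; this separation is a design choice of the sketch, not necessarily the paper's exact setup.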

Published

2022-06-28

How to Cite

Binici, K., Aggarwal, S., Pham, N. T., Leman, K., & Mitra, T. (2022). Robust and Resource-Efficient Data-Free Knowledge Distillation by Generative Pseudo Replay. Proceedings of the AAAI Conference on Artificial Intelligence, 36(6), 6089-6096. https://doi.org/10.1609/aaai.v36i6.20556

Section

AAAI Technical Track on Machine Learning I