Up to 100x Faster Data-Free Knowledge Distillation

Authors

  • Gongfan Fang, Zhejiang University; Alibaba-Zhejiang University Joint Research Institute of Frontier Technologies
  • Kanya Mo, Zhejiang University
  • Xinchao Wang, National University of Singapore
  • Jie Song, Zhejiang University
  • Shitao Bei, Zhejiang University
  • Haofei Zhang, Zhejiang University
  • Mingli Song, Zhejiang University; Alibaba-Zhejiang University Joint Research Institute of Frontier Technologies

DOI:

https://doi.org/10.1609/aaai.v36i6.20613

Keywords:

Machine Learning (ML), Computer Vision (CV)

Abstract

Data-free knowledge distillation (DFKD) has recently attracted increasing attention from the research community, owing to its ability to compress a model using only synthetic data. Despite the encouraging results achieved, state-of-the-art DFKD methods still suffer from inefficient data synthesis, which makes data-free training extremely time-consuming and thus inapplicable to large-scale tasks. In this work, we introduce an effective scheme, termed FastDFKD, that accelerates DFKD by orders of magnitude. At the heart of our approach is a novel strategy for reusing the common features shared across training data to synthesize different data instances. Unlike prior methods that optimize each data instance independently, we propose to learn a meta-synthesizer that captures these common features as an initialization for fast data synthesis. As a result, FastDFKD synthesizes data within only a few steps, significantly improving the efficiency of data-free training. Experiments on CIFAR, NYUv2, and ImageNet demonstrate that FastDFKD achieves 10x and even 100x acceleration while preserving performance on par with the state of the art. Code is available at https://github.com/zju-vipa/Fast-Datafree.
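
The key idea in the abstract is to amortize data synthesis: instead of inverting the teacher from scratch for every batch, a meta-learned initialization carries the common features, so each batch needs only a few adaptation steps. Below is a minimal PyTorch-style sketch of that idea, not the authors' released implementation (see the repository linked above for that); the names synthesis_loss, fast_synthesize, inner_steps, inner_lr, and meta_lr, as well as the Reptile-style meta-update, are illustrative assumptions.

```python
import copy
import torch

def synthesis_loss(images, teacher):
    # Hypothetical inversion objective: push the teacher toward confident
    # predictions on the synthesized images. The real method combines
    # several regularizers, omitted here for brevity.
    logits = teacher(images)
    return -logits.max(dim=1).values.mean()

def fast_synthesize(meta_generator, teacher, z, inner_steps=5,
                    inner_lr=0.1, meta_lr=0.01):
    """Adapt a copy of the meta-learned generator for a few steps, then
    nudge the initialization toward the adapted weights (Reptile-style)."""
    g = copy.deepcopy(meta_generator)      # start from shared common features
    opt = torch.optim.Adam(g.parameters(), lr=inner_lr)
    for _ in range(inner_steps):           # a few steps instead of thousands
        loss = synthesis_loss(g(z), teacher)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():                  # meta-update of the initialization
        for p_meta, p in zip(meta_generator.parameters(), g.parameters()):
            p_meta += meta_lr * (p - p_meta)
    return g(z).detach()                   # synthetic batch for distillation
```

Because every batch starts from the shared initialization rather than from random noise, only a handful of inner steps are needed per batch, which is where the reported 10x to 100x speedup over per-instance optimization comes from.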

Published

2022-06-28

How to Cite

Fang, G., Mo, K., Wang, X., Song, J., Bei, S., Zhang, H., & Song, M. (2022). Up to 100x Faster Data-Free Knowledge Distillation. Proceedings of the AAAI Conference on Artificial Intelligence, 36(6), 6597-6604. https://doi.org/10.1609/aaai.v36i6.20613

Section

AAAI Technical Track on Machine Learning I