Data-Distortion Guided Self-Distillation for Deep Neural Networks

Authors

  • Ting-Bing Xu, Chinese Academy of Sciences
  • Cheng-Lin Liu, Chinese Academy of Sciences

DOI:

https://doi.org/10.1609/aaai.v33i01.33015565

Abstract

Knowledge distillation is an effective technique that has been widely used for transferring knowledge from one network to another. Despite its effectiveness in improving network performance, the dependence on accompanying assistive models complicates the training of a single network and incurs large memory and time costs. In this paper, we design a more elegant self-distillation mechanism that transfers knowledge between different distorted versions of the same training data, without relying on accompanying models. Specifically, the potential capacity of a single network is exploited by learning consistent global feature distributions and posterior distributions (class probabilities) across these distorted versions of the data. Extensive experiments on multiple datasets (i.e., CIFAR-10/100 and ImageNet) demonstrate that the proposed method effectively improves the generalization performance of various network architectures (such as AlexNet, ResNet, Wide ResNet, and DenseNet) and outperforms existing distillation methods with little extra training effort.
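To make the mechanism described in the abstract concrete, the following is a minimal, hypothetical sketch of such a consistency loss in PyTorch. The function name, the symmetric KL term on temperature-softened class probabilities, the simple L2 penalty on batch feature statistics (standing in for the paper's feature-distribution consistency term), and the weights alpha, beta, and T are illustrative assumptions, not the authors' released code.

    # Hypothetical sketch: self-distillation across two distorted views
    # (a, b) of the same training batch.
    import torch.nn.functional as F

    def self_distillation_loss(logits_a, logits_b, feats_a, feats_b,
                               labels, T=3.0, alpha=1.0, beta=1.0):
        # Supervised cross-entropy on both distorted views.
        ce = F.cross_entropy(logits_a, labels) + F.cross_entropy(logits_b, labels)

        # Posterior (class-probability) consistency: symmetric KL divergence
        # on temperature-softened softmax outputs.
        pa = F.log_softmax(logits_a / T, dim=1)
        pb = F.log_softmax(logits_b / T, dim=1)
        kl = F.kl_div(pa, pb.exp(), reduction='batchmean') + \
             F.kl_div(pb, pa.exp(), reduction='batchmean')

        # Global feature-distribution consistency; here an L2 penalty on the
        # batch mean features serves as a stand-in for the paper's term.
        feat = (feats_a.mean(0) - feats_b.mean(0)).pow(2).sum()

        return ce + alpha * (T * T) * kl + beta * feat

In use, each training step would forward two differently distorted copies of the same images through the single network and minimize this combined loss; no separate teacher model is involved.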

Published

2019-07-17

How to Cite

Xu, T.-B., & Liu, C.-L. (2019). Data-Distortion Guided Self-Distillation for Deep Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 5565-5572. https://doi.org/10.1609/aaai.v33i01.33015565

Issue

Vol. 33 No. 01 (2019)

Section

AAAI Technical Track: Machine Learning