Hierarchical Knowledge Squeezed Adversarial Network Compression

Authors

  • Peng Li East China Normal University
  • Chang Shu East China Normal University
  • Yuan Xie East China Normal University
  • Yan Qu Xiamen University
  • Hui Kong Horizon Robotics

DOI:

https://doi.org/10.1609/aaai.v34i07.6799

Abstract

Deep network compression has achieved notable progress via knowledge distillation, where a teacher-student learning scheme is adopted with a predetermined loss. Recently, the focus has shifted to adversarial training that minimizes the discrepancy between the output distributions of the two networks. However, these methods emphasize result-oriented learning while neglecting process-oriented learning, losing the rich information contained in the whole network pipeline. In other (non-GAN-based) process-oriented methods, knowledge has usually been transferred in a redundant manner. Observing that a small network cannot perfectly mimic a large one due to the huge gap in network scale, we propose a knowledge transfer method, involving effective intermediate supervision, under the adversarial training framework to learn the student network. Unlike other intermediate supervision methods, we design the knowledge representation in a compact form by introducing a task-driven attention mechanism. Meanwhile, to improve the representation capability of the attention-based method, a hierarchical structure is utilized so that powerful but highly squeezed knowledge is realized and the knowledge from the teacher network can accommodate the size of the student network. Extensive experimental results on three typical benchmark datasets, i.e., CIFAR-10, CIFAR-100, and ImageNet, demonstrate that our method achieves highly superior performance against state-of-the-art methods.
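To illustrate the idea of squeezing intermediate knowledge into a compact, scale-independent form, the sketch below computes a spatial attention map from a convolutional feature tensor by collapsing its channel dimension, then measures the distance between teacher and student maps. This is a minimal NumPy sketch in the spirit of attention-based intermediate supervision, not the paper's exact task-driven, hierarchical attention; all function names and shapes are illustrative assumptions.

```python
import numpy as np

def attention_map(feat):
    # feat: (C, H, W) feature tensor. Collapse the channel dimension by
    # summing squared activations, yielding an (H, W) spatial attention
    # map, then L2-normalize so maps from networks of different widths
    # are comparable (illustrative choice, not the paper's definition).
    amap = (feat ** 2).sum(axis=0)
    return amap / (np.linalg.norm(amap) + 1e-8)

def attention_transfer_loss(teacher_feat, student_feat):
    # Distance between normalized attention maps: the teacher's 64-channel
    # knowledge is "squeezed" to a single spatial map that a narrower
    # student can match regardless of the channel-count gap.
    t = attention_map(teacher_feat)
    s = attention_map(student_feat)
    return float(np.linalg.norm(t - s))

rng = np.random.default_rng(0)
t_feat = rng.standard_normal((64, 8, 8))   # hypothetical teacher layer
s_feat = rng.standard_normal((16, 8, 8))   # hypothetical narrower student layer
loss = attention_transfer_loss(t_feat, s_feat)
```

Because both maps are normalized to unit norm, the loss depends only on the spatial distribution of activation energy, which is what allows supervision across the teacher-student scale gap.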

Published

2020-04-03

How to Cite

Li, P., Shu, C., Xie, Y., Qu, Y., & Kong, H. (2020). Hierarchical Knowledge Squeezed Adversarial Network Compression. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 11370-11377. https://doi.org/10.1609/aaai.v34i07.6799

Section

AAAI Technical Track: Vision