AutoCompress: An Automatic DNN Structured Pruning Framework for Ultra-High Compression Rates

Authors

  • Ning Liu, DiDi AI Labs
  • Xiaolong Ma, Northeastern University
  • Zhiyuan Xu, Syracuse University
  • Yanzhi Wang, Northeastern University
  • Jian Tang, DiDi AI Labs
  • Jieping Ye, DiDi AI Labs

DOI:

https://doi.org/10.1609/aaai.v34i04.5924

Abstract

Structured weight pruning is a representative model compression technique for DNNs that reduces storage and computation requirements and accelerates inference. Because these methods expose a large number of flexible hyperparameters, an automatic hyperparameter determination process is necessary. This work proposes AutoCompress, an automatic structured pruning framework with the following key performance improvements: (i) it effectively incorporates combinations of structured pruning schemes into the automatic process; (ii) it adopts the state-of-the-art ADMM-based structured weight pruning as the core algorithm and proposes an innovative additional purification step for further weight reduction without accuracy loss; and (iii) it develops an effective heuristic search method enhanced by experience-based guided search, replacing the prior deep reinforcement learning technique, which has an underlying incompatibility with the target pruning problem. Extensive experiments on the CIFAR-10 and ImageNet datasets demonstrate that AutoCompress is the key to achieving ultra-high pruning rates in weight count and FLOPs that could not be achieved before. As an example, AutoCompress outperforms prior work on automatic model compression by up to 33× in pruning rate (a 120× reduction in the actual parameter count) under the same accuracy. Significant inference speedup is observed from the AutoCompress framework in actual measurements on a smartphone. We release the models from this work at an anonymous link: http://bit.ly/2VZ63dS.
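To make the ADMM-based core concrete, below is a minimal PyTorch sketch of ADMM-based structured (filter) pruning for a single convolutional layer: the weight update runs SGD on the task loss plus a quadratic penalty, the auxiliary variable is the Euclidean projection onto the set of tensors with at most k nonzero filters, and the dual variable accumulates the residual. The function names (`project_filters`, `admm_prune_layer`) and all hyperparameter values are illustrative assumptions, not the authors' implementation, and the per-layer keep ratio is exactly the kind of hyperparameter AutoCompress searches over automatically.

```python
import torch

def project_filters(weight, keep_ratio):
    """Euclidean projection onto the structured-sparse set:
    keep the output-channel filters with the largest L2 norm, zero the rest."""
    num_filters = weight.shape[0]
    k = max(1, int(round(keep_ratio * num_filters)))
    norms = weight.flatten(1).norm(dim=1)          # one L2 norm per filter
    keep = torch.topk(norms, k).indices
    mask = torch.zeros(num_filters, dtype=torch.bool, device=weight.device)
    mask[keep] = True
    pruned = weight.clone()
    pruned[~mask] = 0.0
    return pruned

def admm_prune_layer(model, layer, loader, loss_fn, keep_ratio,
                     rho=1e-3, admm_steps=10, inner_epochs=1, lr=1e-3):
    """Illustrative ADMM loop for one layer: alternate SGD on the augmented
    loss (W-update), projection onto top-k filters (Z-update), and dual
    ascent (U-update); hard-prune at the end."""
    W = layer.weight
    Z = project_filters(W.detach(), keep_ratio)
    U = torch.zeros_like(W)
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(admm_steps):
        for _ in range(inner_epochs):
            for x, y in loader:
                opt.zero_grad()
                loss = loss_fn(model(x), y)
                # quadratic ADMM penalty pulls W toward the structured set
                loss = loss + (rho / 2) * (W - Z + U).pow(2).sum()
                loss.backward()
                opt.step()
        with torch.no_grad():
            Z = project_filters(W + U, keep_ratio)  # Z-update (projection)
            U += W - Z                              # U-update (dual ascent)
    with torch.no_grad():
        W.copy_(project_filters(W, keep_ratio))     # final hard pruning
```

In the full framework, a search loop would propose `keep_ratio` (and the pruning scheme) per layer, run a pruning step like the above, and evaluate accuracy to guide the next proposal; the paper's purification step would then remove further redundant weights without accuracy loss.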

Published

2020-04-03

How to Cite

Liu, N., Ma, X., Xu, Z., Wang, Y., Tang, J., & Ye, J. (2020). AutoCompress: An Automatic DNN Structured Pruning Framework for Ultra-High Compression Rates. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 4876-4883. https://doi.org/10.1609/aaai.v34i04.5924

Section

AAAI Technical Track: Machine Learning