Deep Neural Network Quantization via Layer-Wise Optimization Using Limited Training Data

Authors

  • Shangyu Chen Nanyang Technological University
  • Wenya Wang Nanyang Technological University
  • Sinno Jialin Pan Nanyang Technological University

DOI:

https://doi.org/10.1609/aaai.v33i01.33013329

Abstract

The advancement of deep models poses great challenges to real-world deployment because of the limited computational ability and storage space on edge devices. To address this problem, existing works prune or quantize deep models. However, most existing methods rely heavily on a supervised training process to achieve satisfactory performance, requiring a large amount of labeled training data, which may not be practical for real deployment. In this paper, we propose a novel layer-wise quantization method for deep neural networks, which requires only limited training data (1% of the original dataset). Specifically, we formulate parameter quantization for each layer as a discrete optimization problem, and solve it using the Alternating Direction Method of Multipliers (ADMM), which gives an efficient closed-form solution. We prove that the final performance drop after quantization is bounded by a linear combination of the reconstruction errors incurred at each layer. Based on this theorem, we propose an algorithm to quantize a deep neural network layer by layer, with an additional weight-update step to minimize the final error. Extensive experiments on benchmark deep models are conducted to demonstrate the effectiveness of our proposed method using 1% of the CIFAR-10 and ImageNet datasets. Code is available at: https://github.com/csyhhu/L-DNQ
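To illustrate the ADMM splitting described above, the sketch below shows a generic layer-wise quantization loop in NumPy. It is not the authors' L-DNQ implementation: the function names (`quantize_layer_admm`, `project_to_levels`), the choice of quantization level set, and the hyperparameters `rho` and `iters` are all assumptions made for illustration. The layer's reconstruction error on a small calibration batch is minimized by alternating a closed-form least-squares update for a continuous copy of the weights with a closed-form projection onto the discrete levels.

```python
import numpy as np

def project_to_levels(M, levels):
    """Round each entry of M to the nearest value in the discrete set `levels`."""
    levels = np.asarray(levels)
    idx = np.abs(M[..., None] - levels).argmin(axis=-1)
    return levels[idx]

def quantize_layer_admm(X, W, levels, rho=1e-2, iters=50):
    """Hypothetical sketch of layer-wise quantization via ADMM splitting.

    Minimizes (1/2) * ||X W - X G||_F^2 subject to the quantized copy Q
    having entries in `levels`, using a continuous variable G, a discrete
    variable Q, and a scaled dual variable U.

    X : (n, d) layer inputs collected from a small calibration subset
    W : (d, k) pretrained full-precision weights of the layer
    levels : 1-D array of allowed quantized values (e.g. [-a, 0, a])
    """
    d, _ = W.shape
    G = W.copy()                       # continuous copy of the weights
    Q = project_to_levels(G, levels)   # quantized copy
    U = np.zeros_like(W)               # scaled dual variable

    # Precompute terms of the closed-form continuous update:
    # G = (X^T X + rho I)^{-1} (X^T X W + rho (Q - U))
    XtX = X.T @ X
    A = XtX + rho * np.eye(d)
    B = XtX @ W

    for _ in range(iters):
        G = np.linalg.solve(A, B + rho * (Q - U))   # continuous update (closed form)
        Q = project_to_levels(G + U, levels)        # discrete projection (closed form)
        U = U + G - Q                               # dual ascent
    return Q
```

In this sketch, both subproblems have closed-form solutions, which is what makes the per-layer ADMM iterations cheap enough to run with only a small calibration set; the actual level set and penalty schedule used in the paper may differ.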

Published

2019-07-17

How to Cite

Chen, S., Wang, W., & Pan, S. J. (2019). Deep Neural Network Quantization via Layer-Wise Optimization Using Limited Training Data. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 3329-3336. https://doi.org/10.1609/aaai.v33i01.33013329

Section

AAAI Technical Track: Machine Learning