Deep Neural Network Compression With Single and Multiple Level Quantization

Authors

  • Yuhui Xu, Shanghai Jiao Tong University
  • Yongzhuang Wang, Shanghai Jiao Tong University
  • Aojun Zhou, University of Chinese Academy of Sciences
  • Weiyao Lin, Shanghai Jiao Tong University
  • Hongkai Xiong, Shanghai Jiao Tong University

DOI:

https://doi.org/10.1609/aaai.v32i1.11663

Keywords:

Network compression, Network quantization

Abstract

Network quantization is an effective solution to compress deep neural networks for practical usage. Existing network quantization methods cannot sufficiently exploit the depth information to generate low-bit compressed networks. In this paper, we propose two novel network quantization approaches: single-level network quantization (SLQ) for high-bit quantization and multi-level network quantization (MLQ) for extremely low-bit (ternary) quantization. We are the first to consider network quantization at both the width and the depth level. At the width level, parameters are divided into two parts: one for quantization and the other for re-training to eliminate the quantization loss; SLQ leverages the distribution of the parameters to improve the width-level quantization. At the depth level, we introduce incremental layer compensation, which quantizes the layers iteratively and decreases the quantization loss at each iteration. The proposed approaches are validated with extensive experiments on state-of-the-art neural networks, including AlexNet, VGG-16, GoogLeNet, and ResNet-18. Both SLQ and MLQ achieve impressive results.
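
The sketch below illustrates the width-level idea described in the abstract: a layer's weights are split into a part that is quantized and frozen and a part kept in full precision for re-training. The power-of-two codebook, the split criterion, and all function names are illustrative assumptions for exposition only, not the paper's actual SLQ/MLQ algorithm.

```python
# Minimal sketch of a width-level weight split: quantize part of the weights
# and leave the rest full-precision for re-training. The codebook, the split
# fraction, and the selection criterion are assumptions, not the paper's method.
import numpy as np


def power_of_two_codebook(weights, num_levels=8):
    """Build a symmetric power-of-two codebook scaled to the weight range (assumed scheme)."""
    max_abs = np.abs(weights).max()
    exponents = np.arange(num_levels // 2)
    positive = max_abs / (2.0 ** exponents)
    return np.concatenate([-positive, positive])


def quantize_to_codebook(weights, codebook):
    """Snap each weight to its nearest codebook entry."""
    idx = np.abs(weights[..., None] - codebook).argmin(axis=-1)
    return codebook[idx]


def width_level_split(weights, quantize_fraction=0.5, num_levels=8):
    """One width-level step: quantize a fraction of the weights, keep the rest trainable.

    Returns the mixed weight tensor and a boolean mask marking the frozen
    (already quantized) entries; the unmasked entries would be updated by
    ordinary re-training to compensate for the quantization loss.
    """
    codebook = power_of_two_codebook(weights, num_levels)
    quantized = quantize_to_codebook(weights, codebook)
    # Freeze the weights whose quantization error is smallest (an assumed criterion).
    error = np.abs(weights - quantized)
    threshold = np.quantile(error, quantize_fraction)
    frozen_mask = error <= threshold
    mixed = np.where(frozen_mask, quantized, weights)
    return mixed, frozen_mask


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=(64, 64))
    mixed, mask = width_level_split(w)
    print(f"frozen (quantized) fraction: {mask.mean():.2f}")
```

In the same spirit, the depth-level step (incremental layer compensation) would apply such a split layer by layer, re-training the remaining full-precision layers after each quantization pass to absorb the accumulated loss.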

Published

2018-04-29

How to Cite

Xu, Y., Wang, Y., Zhou, A., Lin, W., & Xiong, H. (2018). Deep Neural Network Compression With Single and Multiple Level Quantization. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11663