TY - JOUR
AU - Xu, Yuhui
AU - Wang, Yongzhuang
AU - Zhou, Aojun
AU - Lin, Weiyao
AU - Xiong, Hongkai
PY - 2018/04/29
Y2 - 2024/03/29
TI - Deep Neural Network Compression With Single and Multiple Level Quantization
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 32
IS - 1
SE - AAAI Technical Track: Machine Learning
DO - 10.1609/aaai.v32i1.11663
UR - https://ojs.aaai.org/index.php/AAAI/article/view/11663
SP - 
AB - Network quantization is an effective solution to compress deep neural networks for practical usage. Existing network quantization methods cannot sufficiently exploit the depth information to generate low-bit compressed network. In this paper, we propose two novel network quantization approaches, single-level network quantization (SLQ) for high-bit quantization and multi-level network quantization (MLQ) for extremely low-bit quantization (ternary). We are the first to consider the network quantization from both width and depth level. In the width level, parameters are divided into two parts: one for quantization and the other for re-training to eliminate the quantization loss. SLQ leverages the distribution of the parameters to improve the width level. In the depth level, we introduce incremental layer compensation to quantize layers iteratively which decreases the quantization loss in each iteration. The proposed approaches are validated with extensive experiments based on the state-of-the-art neural networks including AlexNet, VGG-16, GoogleNet and ResNet-18. Both SLQ and MLQ achieve impressive results.
ER - 