Adaptive Quantization for Deep Neural Network


  • Yiren Zhou Singapore University of Technology and Design
  • Seyed-Mohsen Moosavi-Dezfooli École Polytechnique Fédérale de Lausanne
  • Ngai-Man Cheung Singapore University of Technology and Design
  • Pascal Frossard École Polytechnique Fédérale de Lausanne



Keywords: Deep Model Compression, Deep Model Quantization


Abstract

In recent years, Deep Neural Networks (DNNs) have been rapidly developed for various applications, with increasingly complex architectures. The performance gains of these DNNs generally come with high computational costs and large memory consumption, which may not be affordable on mobile platforms. Deep model quantization can be used to reduce the computation and memory costs of DNNs, and to deploy complex DNNs on mobile equipment. In this work, we propose an optimization framework for deep model quantization. First, we propose a measurement to estimate the effect of parameter quantization errors in individual layers on the overall model prediction accuracy. Then, we propose an optimization process based on this measurement for finding the optimal quantization bit-width for each layer. This is the first work to theoretically analyze the relationship between the parameter quantization errors of individual layers and model accuracy. Our new quantization algorithm outperforms previous quantization optimization methods, achieving a 20-40% higher compression rate than equal bit-width quantization at the same model prediction accuracy.
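To make the two ingredients named above concrete, per-layer uniform quantization and a search over per-layer bit-widths, here is a minimal Python sketch. It is an illustration only: the functions `quantize_uniform`, `layer_error`, and `allocate_bits`, and the use of mean squared quantization error as an accuracy proxy, are assumptions of this sketch, not the measurement or optimization procedure derived in the paper.

```python
import numpy as np


def quantize_uniform(weights, bits):
    """Uniformly quantize a weight tensor to the given bit-width."""
    w_min, w_max = weights.min(), weights.max()
    if w_max == w_min:                     # degenerate layer: nothing to quantize
        return weights.copy()
    step = (w_max - w_min) / (2 ** bits - 1)
    return np.round((weights - w_min) / step) * step + w_min


def layer_error(weights, bits):
    """Mean squared quantization error of one layer -- an illustrative
    stand-in for the paper's accuracy-effect measurement."""
    return float(np.mean((weights - quantize_uniform(weights, bits)) ** 2))


def allocate_bits(layers, total_bits, min_bits=2, max_bits=16):
    """Greedy bit-width allocation: start every layer at min_bits, then
    repeatedly grant one extra bit to the layer whose error proxy
    shrinks the most, until the total bit budget is spent."""
    bits = {name: min_bits for name in layers}
    for _ in range(total_bits - min_bits * len(layers)):
        best, best_gain = None, 0.0
        for name, w in layers.items():
            if bits[name] >= max_bits:
                continue
            gain = layer_error(w, bits[name]) - layer_error(w, bits[name] + 1)
            if gain > best_gain:
                best, best_gain = name, gain
        if best is None:                   # no layer benefits from more bits
            break
        bits[best] += 1
    return bits


# Toy usage: three random "layers" sharing a budget of 18 bits in total.
rng = np.random.default_rng(0)
layers = {
    "conv1": rng.normal(0.0, 1.0, (64, 27)),
    "conv2": rng.normal(0.0, 0.5, (128, 576)),
    "fc":    rng.normal(0.0, 0.1, (10, 2048)),
}
print(allocate_bits(layers, total_bits=18))
```

A greedy allocation like this naturally spends bits on the layers whose error proxy drops fastest; in the paper, the proxy role is played by the proposed layer-wise measurement of the quantization error's effect on model accuracy.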




How to Cite

Zhou, Y., Moosavi-Dezfooli, S.-M., Cheung, N.-M., & Frossard, P. (2018). Adaptive Quantization for Deep Neural Network. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1).