OMPQ: Orthogonal Mixed Precision Quantization

Authors

  • Yuexiao Ma, Xiamen University
  • Taisong Jin, Xiamen University
  • Xiawu Zheng, Peng Cheng Laboratory
  • Yan Wang, Samsara
  • Huixia Li, Xiamen University
  • Yongjian Wu, Tencent Technology (Shanghai) Co., Ltd.
  • Guannan Jiang, CATL
  • Wei Zhang, CATL
  • Rongrong Ji, Xiamen University, China

DOI:

https://doi.org/10.1609/aaai.v37i7.26084

Keywords:

ML: Learning on the Edge & Model Compression

Abstract

To bridge the ever-increasing gap between the complexity of deep neural networks and hardware capability, network quantization has attracted increasing research attention. The latest trend, mixed precision quantization, exploits hardware support for multiple bit-width arithmetic operations to unleash the full potential of network quantization. However, existing approaches rely heavily on an extremely time-consuming search process and various relaxations when seeking the optimal bit configuration. To address this issue, we propose optimizing a proxy metric based on network orthogonality, which is highly correlated with quantized model accuracy and bit-width and can be solved efficiently with linear programming. Our approach reduces the search time and the required amount of data by orders of magnitude without compromising quantization accuracy. Specifically, we achieve 72.08% Top-1 accuracy on ResNet-18 with 6.7Mb parameters, without any search iterations. Given the high efficiency and low data dependency of our algorithm, we also apply it to post-training quantization, achieving 71.27% Top-1 accuracy on MobileNetV2 with only 1.5Mb parameters.
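
As a rough illustration of the linear-programming step the abstract alludes to, the sketch below is not the authors' code: the score values, layer sizes, bit-width candidates, and budget are all placeholder assumptions. It shows the general shape of the formulation, where each layer is assigned one candidate bit-width by maximizing a per-layer proxy score under a total model-size budget via an LP relaxation that is then rounded to a one-hot choice. OMPQ derives its scores from network orthogonality; here they are arbitrary numbers.

```python
# A minimal sketch (not the authors' implementation) of bit-width
# assignment via linear programming. Hypothetical inputs throughout:
# scores[l][b] stands in for OMPQ's orthogonality-based proxy benefit
# of giving layer l the b-th candidate bit-width.
import numpy as np
from scipy.optimize import linprog

def assign_bitwidths(scores, n_params, bit_choices, size_budget_bits):
    """Solve the LP relaxation of the one-hot assignment, then round."""
    L, B = len(scores), len(bit_choices)
    # Maximize total proxy score -> minimize its negation.
    c = -np.asarray(scores, dtype=float).reshape(L * B)
    # Size constraint: sum_{l,b} x[l,b] * n_params[l] * bits[b] <= budget.
    A_ub = np.array([n_params[l] * bit_choices[b]
                     for l in range(L) for b in range(B)]).reshape(1, L * B)
    b_ub = [size_budget_bits]
    # Each layer picks exactly one bit-width: sum_b x[l,b] == 1.
    A_eq = np.zeros((L, L * B))
    for l in range(L):
        A_eq[l, l * B:(l + 1) * B] = 1.0
    b_eq = np.ones(L)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, 1)] * (L * B), method="highs")
    x = res.x.reshape(L, B)
    # Round the relaxed solution to one bit-width per layer; a production
    # version would re-check the budget after rounding.
    return [bit_choices[int(np.argmax(row))] for row in x]

# Toy usage: 3 layers, candidate bit-widths {2, 4, 8}, placeholder scores.
scores = [[0.2, 0.6, 0.9], [0.1, 0.5, 0.7], [0.3, 0.4, 0.5]]
n_params = [1e5, 2e5, 1e5]
print(assign_bitwidths(scores, n_params, [2, 4, 8], size_budget_bits=1.6e6))
```

Because the relaxation is a plain linear program, the bit configuration comes from a single solve rather than an iterative search, which is the efficiency gain the abstract highlights.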

Published

2023-06-26

How to Cite

Ma, Y., Jin, T., Zheng, X., Wang, Y., Li, H., Wu, Y., Jiang, G., Zhang, W., & Ji, R. (2023). OMPQ: Orthogonal Mixed Precision Quantization. Proceedings of the AAAI Conference on Artificial Intelligence, 37(7), 9029-9037. https://doi.org/10.1609/aaai.v37i7.26084

Section

AAAI Technical Track on Machine Learning II