FracBits: Mixed Precision Quantization via Fractional Bit-Widths

Authors

  • Linjie Yang, ByteDance Inc.
  • Qing Jin, Northeastern University

DOI:

https://doi.org/10.1609/aaai.v35i12.17269

Keywords:

(Deep) Neural Network Algorithms, General

Abstract

Model quantization helps to reduce the model size and latency of deep neural networks. Mixed precision quantization is favorable on customized hardware that supports arithmetic operations at multiple bit-widths, since it can achieve maximum efficiency. We propose a novel learning-based algorithm to derive mixed precision models end-to-end under target computation constraints and model sizes. During optimization, the bit-width of each layer/kernel in the model takes a fractional value between two consecutive bit-widths, which can be adjusted gradually. With a differentiable regularization term, the resource constraints can be met during quantization-aware training, resulting in an optimized mixed precision model. Our final models achieve comparable or better performance than previous mixed precision quantization methods on MobileNetV1/V2 and ResNet18 under different resource constraints on the ImageNet dataset.
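The fractional bit-width idea described in the abstract lends itself to a compact illustration. Below is a minimal sketch, not the authors' implementation: the functions `uniform_quantize`, `fractional_quantize`, and `bitwidth_penalty` and all parameter names are hypothetical, and a real quantization-aware training setup would additionally use a straight-through estimator so gradients pass through the rounding.

```python
import torch

def uniform_quantize(x, bits):
    # Map x (assumed pre-scaled to [0, 1]) onto 2**bits - 1 uniform levels.
    levels = 2.0 ** bits - 1.0
    return torch.round(x * levels) / levels

def fractional_quantize(x, b):
    # Interpolate between the two neighboring integer bit-widths so the
    # effective bit-width b stays differentiable: gradients reach b through
    # the interpolation weights rather than through the rounding itself.
    lo = torch.floor(b)
    frac = b - lo  # fractional part; weights the higher bit-width
    return (1.0 - frac) * uniform_quantize(x, lo) + frac * uniform_quantize(x, lo + 1.0)

def bitwidth_penalty(bit_widths, costs, budget):
    # A differentiable resource regularizer in the spirit of the abstract:
    # penalize the bit-weighted cost (e.g. per-layer MAC counts) above a budget.
    total = torch.sum(bit_widths * costs)
    return torch.relu(total - budget)

# Example: quantize activations with a learnable fractional bit-width.
b = torch.tensor(4.3, requires_grad=True)
x = torch.rand(8)
y = fractional_quantize(x, b)
```

Because the penalty is differentiable in the fractional bit-widths, it can be added to the task loss so that each bit-width drifts toward the budget-satisfying integer values during training.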

Published

2021-05-18

How to Cite

Yang, L., & Jin, Q. (2021). FracBits: Mixed Precision Quantization via Fractional Bit-Widths. Proceedings of the AAAI Conference on Artificial Intelligence, 35(12), 10612-10620. https://doi.org/10.1609/aaai.v35i12.17269

Section

AAAI Technical Track on Machine Learning V