Make RepVGG Greater Again: A Quantization-Aware Approach

Authors

  • Xiangxiang Chu, Meituan
  • Liang Li, Meituan
  • Bo Zhang, Meituan

DOI:

https://doi.org/10.1609/aaai.v38i10.29045

Keywords:

ML: Deep Neural Architectures and Foundation Models, ML: Classification and Regression

Abstract

The tradeoff between performance and inference speed is critical for practical applications. Architecture reparameterization achieves better tradeoffs and is becoming an increasingly popular ingredient in modern convolutional neural networks. Nonetheless, its quantization performance is usually too poor to deploy (e.g., more than a 20% top-1 accuracy drop on ImageNet) when INT8 inference is desired. In this paper, we dive into the underlying mechanism of this failure, where the original design inevitably enlarges quantization error. We propose a simple, robust, and effective remedy: a quantization-friendly structure that still enjoys the benefits of reparameterization. Our method largely closes the gap between INT8 and FP32 accuracy for RepVGG. Without bells and whistles, the top-1 accuracy drop on ImageNet is reduced to within 2% by standard post-training quantization. Extensive experiments on detection and semantic segmentation tasks verify its generalization.
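For context, the reparameterization the abstract refers to fuses RepVGG's training-time 3x3, 1x1, and identity branches (each followed by BatchNorm) into a single 3x3 convolution for inference. The sketch below is a minimal, illustrative PyTorch rendering of that standard fusion, not the authors' released code or their quantization-friendly remedy; the class and function names are our own.

    # Minimal sketch of RepVGG-style branch fusion (illustrative; names are ours).
    # A 3x3 conv+BN, a 1x1 conv+BN, and an identity BN branch are merged into one
    # 3x3 conv whose output matches the training-time sum of the three branches.
    import torch
    import torch.nn as nn

    def fuse_conv_bn(conv_weight, bn):
        # Fold BatchNorm running statistics into an equivalent kernel and bias.
        std = (bn.running_var + bn.eps).sqrt()
        scale = bn.weight / std
        kernel = conv_weight * scale.reshape(-1, 1, 1, 1)
        bias = bn.bias - bn.running_mean * scale
        return kernel, bias

    class RepBlockSketch(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.conv3x3 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            self.bn3x3 = nn.BatchNorm2d(channels)
            self.conv1x1 = nn.Conv2d(channels, channels, 1, bias=False)
            self.bn1x1 = nn.BatchNorm2d(channels)
            self.bn_id = nn.BatchNorm2d(channels)  # identity branch

        def forward(self, x):
            # Training-time multi-branch form.
            return (self.bn3x3(self.conv3x3(x))
                    + self.bn1x1(self.conv1x1(x))
                    + self.bn_id(x))

        def reparameterize(self):
            # Inference-time single 3x3 conv reproducing the branch sum.
            c = self.conv3x3.out_channels
            k3, b3 = fuse_conv_bn(self.conv3x3.weight, self.bn3x3)
            k1, b1 = fuse_conv_bn(self.conv1x1.weight, self.bn1x1)
            k1 = nn.functional.pad(k1, [1, 1, 1, 1])  # put the 1x1 at the 3x3 center
            id_kernel = torch.zeros(c, c, 3, 3)
            for i in range(c):
                id_kernel[i, i, 1, 1] = 1.0           # identity as a 3x3 kernel
            kid, bid = fuse_conv_bn(id_kernel, self.bn_id)
            fused = nn.Conv2d(c, c, 3, padding=1, bias=True)
            fused.weight.data = k3 + k1 + kid
            fused.bias.data = b3 + b1 + bid
            return fused

    if __name__ == "__main__":
        block = RepBlockSketch(8).eval()
        x = torch.randn(1, 8, 16, 16)
        fused = block.reparameterize().eval()
        print(torch.allclose(block(x), fused(x), atol=1e-5))  # True: outputs match

It is this fused single-branch form that is handed to standard INT8 post-training quantization; the paper's analysis concerns why the weight and activation distributions produced by the original multi-branch design make that step lossy, and how to restructure the block so it is not.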

Published

2024-03-24

How to Cite

Chu, X., Li, L., & Zhang, B. (2024). Make RepVGG Greater Again: A Quantization-Aware Approach. Proceedings of the AAAI Conference on Artificial Intelligence, 38(10), 11624-11632. https://doi.org/10.1609/aaai.v38i10.29045

Issue

Vol. 38 No. 10 (2024)

Section

AAAI Technical Track on Machine Learning I