HarmoQ: Harmonized Post-Training Quantization for High-Fidelity Image Super-Resolution
DOI: https://doi.org/10.1609/aaai.v40i12.37944
Abstract
Post-training quantization offers an efficient pathway to deploy super-resolution models, yet existing methods treat weight and activation quantization independently, missing their critical interplay. Through controlled experiments on SwinIR, we uncover a striking asymmetry: weight quantization primarily degrades structural similarity, while activation quantization disproportionately affects pixel-level accuracy. This stems from their distinct roles: weights encode learned restoration priors for textures and edges, whereas activations carry input-specific intensity information. Building on this insight, we propose HarmoQ, a unified framework that harmonizes quantization across components through three synergistic steps: structural residual calibration proactively adjusts weights to compensate for activation-induced detail loss, harmonized scale optimization analytically balances quantization difficulty via closed-form solutions, and adaptive boundary refinement iteratively maintains this balance during optimization. Experiments show HarmoQ achieves substantial gains under aggressive compression, outperforming prior art by 0.46 dB on Set5 at 2-bit while delivering 3.2× speedup and 4× memory reduction on A100 GPUs. This work provides the first systematic analysis of weight-activation coupling in super-resolution quantization and establishes a principled solution for efficient high-quality image restoration.
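To make the weight-activation asymmetry concrete, below is a minimal sketch of the decoupled baseline the abstract argues against: symmetric uniform post-training quantization applied independently to a weight matrix and an activation vector. This is illustrative only; the tensors, shapes, and min-max scale rule are assumptions, and nothing here implements HarmoQ's structural residual calibration, harmonized scale optimization, or adaptive boundary refinement.

```python
import numpy as np

def uniform_quantize(x, n_bits):
    """Symmetric uniform fake-quantization with a naive per-tensor
    min-max scale. Generic PTQ, not the paper's method."""
    qmax = 2 ** (n_bits - 1) - 1              # e.g. 1 at 2-bit, 127 at 8-bit
    scale = np.abs(x).max() / qmax            # one scale for the whole tensor
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale                          # dequantized values

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(64, 64))      # stand-in for a learned weight matrix
a = rng.normal(0.0, 1.0, size=(64,))         # stand-in for an input activation

full = w @ a                                  # full-precision reference output
w_only = uniform_quantize(w, 2) @ a           # quantize weights only
a_only = w @ uniform_quantize(a, 2)           # quantize activations only

for name, y in [("weights quantized", w_only), ("activations quantized", a_only)]:
    mse = np.mean((full - y) ** 2)
    print(f"{name}: output MSE vs. full precision = {mse:.4f}")
```

Quantizing each side in isolation exposes the two error sources separately, which is the kind of controlled comparison the abstract describes running on SwinIR; the paper's contribution is to couple the two scales rather than tune them independently as above.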
Published
2026-03-14
How to Cite
Wang, H., Chen, J., Song, X., & Zheng, Y. (2026). HarmoQ: Harmonized Post-Training Quantization for High-Fidelity Image Super-Resolution. Proceedings of the AAAI Conference on Artificial Intelligence, 40(12), 9802–9810. https://doi.org/10.1609/aaai.v40i12.37944
Issue
Vol. 40 No. 12 (2026)
Section
AAAI Technical Track on Computer Vision IX