Rethinking Data-Free Quantization as a Zero-Sum Game

Authors

  • Biao Qian, Hefei University of Technology
  • Yang Wang, Hefei University of Technology
  • Richang Hong, Hefei University of Technology
  • Meng Wang, Hefei University of Technology

DOI:

https://doi.org/10.1609/aaai.v37i8.26136

Keywords:

ML: Learning on the Edge & Model Compression, GTEP: Game Theory, ML: Classification and Regression, ML: Deep Generative Models & Autoencoders

Abstract

Data-free quantization (DFQ) recovers the performance of a quantized network (Q) without accessing the real data; instead, it generates fake samples via a generator (G) that learns from the full-precision network (P). However, such a sample generation process is entirely independent of Q, in that it fails to consider the adaptability of the generated samples, i.e., whether they are beneficial or adversarial to the learning process of Q, which results in non-negligible performance loss. This observation raises several crucial questions: how can the sample adaptability to Q be measured and exploited under varied bit-width scenarios, and how can samples with desirable adaptability be generated to benefit the quantized network? These questions impel us to revisit DFQ. In this paper, we answer them from a game-theoretic perspective, specializing DFQ as a zero-sum game between two players, a generator and a quantized network, and further propose an Adaptability-aware Sample Generation (AdaSG) method. Technically, AdaSG reformulates DFQ as a dynamic maximization-vs-minimization game process anchored on the sample adaptability. The maximization process aims to generate samples with desirable adaptability, which is then reduced by the minimization process after calibrating Q for performance recovery. A Balance Gap is defined to guide the game process toward stationarity so as to maximally benefit Q. Theoretical analysis and empirical studies verify the superiority of AdaSG over state-of-the-art methods. Our code is available at https://github.com/hfutqian/AdaSG.
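
To make the maximization-vs-minimization game concrete, below is a minimal sketch of one game step, assuming a PyTorch-style setup. The names (generator, fp_net, q_net, adaptability) and the KL-divergence-based adaptability proxy are illustrative assumptions drawn only from the abstract, not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def adaptability(fp_net, q_net, samples):
        # Proxy for sample adaptability (assumption): disagreement between the
        # quantized network Q and the full-precision network P on generated samples.
        with torch.no_grad():
            p_logits = fp_net(samples)
        q_logits = q_net(samples)
        return F.kl_div(F.log_softmax(q_logits, dim=1),
                        F.softmax(p_logits, dim=1),
                        reduction="batchmean")

    def game_step(generator, fp_net, q_net, g_opt, q_opt, batch_size, z_dim):
        # Maximization: update G to produce samples with high adaptability to Q.
        z = torch.randn(batch_size, z_dim)
        g_loss = -adaptability(fp_net, q_net, generator(z))
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()

        # Minimization: calibrate Q on the generated samples to reduce that adaptability.
        fake = generator(z).detach()
        q_loss = adaptability(fp_net, q_net, fake)
        q_opt.zero_grad(); q_loss.backward(); q_opt.step()

In this sketch, the gap between the value reached by the maximization step and the value remaining after the minimization step plays the role the abstract assigns to the Balance Gap, i.e., a signal of how far the game is from stationarity.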

Published

2023-06-26

How to Cite

Qian, B., Wang, Y., Hong, R., & Wang, M. (2023). Rethinking Data-Free Quantization as a Zero-Sum Game. Proceedings of the AAAI Conference on Artificial Intelligence, 37(8), 9489-9497. https://doi.org/10.1609/aaai.v37i8.26136

Section

AAAI Technical Track on Machine Learning III