BIQ: Bisection Interval Quantization for Communication-efficient Federated Learning

Authors

  • Luyang Gai, Xi'an Jiaotong University
  • Shusen Yang, Xi'an Jiaotong University
  • Xuebin Ren, Xi'an Jiaotong University
  • Zihao Zhou, Xi'an Jiaotong University

DOI:

https://doi.org/10.1609/aaai.v40i25.39259

Abstract

Quantization is a pivotal technique for enhancing communication efficiency in Federated Learning (FL). Traditional quantization methods often use uniform intervals, which may fail to adequately characterize non-uniform data distributions, leading to substantial estimation errors and degraded model performance. Non-uniform quantization can mitigate this problem. However, when applied to FL, it incurs additional communication overhead to align parameter distributions among distributed models. To address this issue, we propose Bisection Interval Quantization (BIQ), a novel non-uniform quantization framework for FL with high communication efficiency. In particular, BIQ optimizes the interval selection through recursive bisection among distributed clients without extra parameter communication. For scenarios involving large numbers of boundary inputs, we further design Weighted Bisection Interval Quantization (WBIQ), which incorporates maximum likelihood estimation into boundary value reconstruction, enhancing the estimation quality of boundary inputs. Our theoretical analysis rigorously establishes, for the first time under biased quantization conditions, that both BIQ and WBIQ achieve tighter error bounds and enhanced stability. Extensive experiments validate that both BIQ and WBIQ significantly accelerate the convergence of FL model training compared to state-of-the-art quantizers, under both convex and non-convex settings.
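The abstract's core idea, non-uniform quantization intervals chosen by recursive bisection, can be illustrated with a minimal sketch. This is NOT the paper's actual algorithm (the precise interval-selection rule, error analysis, and communication protocol are defined in the paper); as a hypothetical stand-in, the sketch below greedily bisects the currently densest interval, yielding a data-adaptive, non-uniform quantization grid.

```python
import numpy as np

def bisection_boundaries(values, levels=8):
    """Illustrative sketch only: build a non-uniform quantization grid
    by repeatedly bisecting the interval that contains the most values.
    (BIQ's real selection criterion may differ; this is an assumption.)"""
    bounds = [float(values.min()), float(values.max())]
    while len(bounds) < levels:
        bounds.sort()
        # Count how many values fall into each current interval.
        counts = [np.sum((values >= a) & (values <= b))
                  for a, b in zip(bounds[:-1], bounds[1:])]
        i = int(np.argmax(counts))                        # densest interval
        bounds.append((bounds[i] + bounds[i + 1]) / 2.0)  # bisect it
    return np.array(sorted(bounds))

def quantize(values, bounds):
    """Map each value to its nearest boundary (deterministic rounding;
    stochastic rounding would typically be used for unbiasedness)."""
    idx = np.abs(values[:, None] - bounds[None, :]).argmin(axis=1)
    return bounds[idx]
```

Because the bisection is driven by values the server and clients already share in a real deployment, no extra distribution parameters would need to be transmitted, which is the communication-efficiency property the abstract emphasizes.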

Published

2026-03-14

How to Cite

Gai, L., Yang, S., Ren, X., & Zhou, Z. (2026). BIQ: Bisection Interval Quantization for Communication-efficient Federated Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 40(25), 21154–21162. https://doi.org/10.1609/aaai.v40i25.39259

Section

AAAI Technical Track on Machine Learning II