PPFL: A Parameter Behavior-Driven Plug-in Personalization Engine for Federated Learning
DOI: https://doi.org/10.1609/aaai.v40i24.39073

Abstract
Personalized Federated Learning (PFL) customizes models for each client to mitigate challenges from non-IID data, wherein a dominant strategy is model decoupling that partitions models into shared and personalized parts based on architectural priors (e.g., backbone vs. head). However, we reveal a critical flaw in this strategy: it induces "intrinsic drift," a performance degradation often more severe than the well-known client drift, which limits final accuracy. We trace this drift to a steep cliff of high loss emerging from the naive stitching of shared and personalized parts. To address this, we shift from architectural partitioning to a parameter behavior-driven paradigm. We introduce PPFL, an approach that employs a novel soft-fusion strategy guided by parameter-wise behavioral perception. PPFL dynamically infers each parameter's functional role—whether it behaves more like a 'personalist' or a 'generalist' in the current context—by synthesizing its multifaceted behavior observed during local training. Extensive experiments on image, text, and multimodal classification benchmarks show that PPFL outperforms eight state-of-the-art baselines by up to 5.3%. Moreover, it can function as a plug-in module, boosting the accuracy of vanilla FedAvg with a 16.82% absolute gain.
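To make the parameter-wise soft-fusion idea concrete, here is a minimal sketch. The abstract does not specify the actual fusion rule or behavior signal, so both `behavior_score` (deviation of the local update from the global update, as a stand-in for the paper's "multifaceted behavior") and the convex per-parameter blend in `soft_fuse` are illustrative assumptions, not PPFL's actual method.

```python
import numpy as np

def behavior_score(local_update, global_update, eps=1e-12):
    """Hypothetical behavioral signal: how strongly each parameter's local
    update deviates from the aggregated global update, normalized to [0, 1].
    Larger deviation -> the parameter acts more like a 'personalist'."""
    dev = np.abs(local_update - global_update)
    return dev / (dev.max() + eps)

def soft_fuse(local_params, global_params, behavior_scores):
    """Per-parameter soft fusion (illustrative blend, not the paper's rule).

    A score near 1 keeps the locally trained value ('personalist');
    a score near 0 leans on the aggregated global value ('generalist').
    """
    fused = {}
    for name, local in local_params.items():
        alpha = behavior_scores[name]  # per-parameter weight in [0, 1]
        fused[name] = alpha * local + (1.0 - alpha) * global_params[name]
    return fused
```

Because the fusion is elementwise rather than per-layer, a single weight matrix can mix 'personalist' and 'generalist' entries, which is what distinguishes this paradigm from architectural decoupling (e.g., backbone vs. head).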
Published
2026-03-14
How to Cite
Cao, Q., Zhu, Z., Lian, Z., Zhang, R., Li, B., Xiong, Y., & Zhou, X. (2026). PPFL: A Parameter Behavior-Driven Plug-in Personalization Engine for Federated Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 40(24), 19898-19906. https://doi.org/10.1609/aaai.v40i24.39073
Section
AAAI Technical Track on Machine Learning I