Re-architecting Personalized Federated Learning for Demanding Edge Environments

Authors

  • Quyang Pan, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100190, China
  • Sheng Sun, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
  • Tingting Wu, China Mobile Research Institute, Xicheng, Beijing 100053, China
  • Zhiyuan Wu, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100190, China
  • Yuwei Wang, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
  • Min Liu, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
  • Bo Gao, School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100044, China
  • Jingyuan Wang, MOE Engineering Research Center of Advanced Computer Application Technology, SCSE, Beihang University, China

DOI:

https://doi.org/10.1609/aaai.v40i29.39655

Abstract

Federated Edge Learning (FEL) has emerged as a promising approach for enabling edge devices to collaboratively train machine learning models while preserving data privacy. Despite its advantages, practical FEL deployment faces significant challenges arising from device constraints and device-server interactions, necessitating heterogeneous, user-adaptive model training under limited and uncertain communication. While knowledge cache-driven federated learning offers a promising FEL solution for demanding edge environments, its logits-based interaction design limits the richness of the information exchanged for on-device model optimization. To tackle this issue, we introduce DistilCacheFL, a novel personalized FEL architecture that enriches the exchange of optimization insights while delivering state-of-the-art performance with efficient communication. DistilCacheFL combines the benefits of dataset distillation and knowledge cache-driven federated learning by storing and organizing distilled data as knowledge in a server-side knowledge cache, allowing devices to periodically download and utilize personalized knowledge for local model optimization. Moreover, a device-centric cache sampling strategy tailors the transferred knowledge to individual devices within a controlled communication budget. Extensive experiments on five datasets covering image recognition, audio understanding, and mobile sensor data mining tasks demonstrate that (1) DistilCacheFL significantly outperforms state-of-the-art methods across model structures, data distributions, and modalities, and (2) DistilCacheFL trains strong personalized on-device models with an improvement of at least 28.6 in communication efficiency.
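The server-side knowledge cache and device-centric sampling described above can be illustrated with a minimal sketch. This is not the paper's actual implementation: the class and method names (`KnowledgeCache`, `sample_for_device`), the use of a per-class cache of distilled feature vectors, and the budget-proportional allocation rule are all illustrative assumptions about how such a design could work.

```python
import numpy as np

class KnowledgeCache:
    """Illustrative server-side cache of distilled examples, keyed by class.

    In a real system the cached entries would be produced by dataset
    distillation; here random vectors stand in for distilled features.
    """

    def __init__(self, num_classes, feat_dim, per_class, seed=0):
        rng = np.random.default_rng(seed)
        self.cache = {c: rng.normal(size=(per_class, feat_dim))
                      for c in range(num_classes)}

    def sample_for_device(self, label_hist, budget, seed=0):
        """Device-centric sampling: split the communication budget across
        classes in proportion to the device's local label histogram, so
        the downloaded knowledge matches that device's data distribution."""
        rng = np.random.default_rng(seed)
        probs = np.asarray(label_hist, dtype=float)
        probs = probs / probs.sum()
        counts = np.floor(probs * budget).astype(int)
        batch = []
        for c, n in enumerate(counts):
            pool = self.cache[c]
            idx = rng.choice(len(pool), size=min(n, len(pool)), replace=False)
            batch.extend((pool[i], c) for i in idx)
        return batch

cache = KnowledgeCache(num_classes=3, feat_dim=8, per_class=10)
hist = [8.0, 1.0, 1.0]            # device's data is skewed toward class 0
batch = cache.sample_for_device(hist, budget=10)
print(len(batch))                  # at most `budget` examples, mostly class 0
```

The device would then mix the downloaded `(feature, label)` pairs into its local training set, which is how distilled knowledge could carry richer optimization signal than exchanged logits while staying within the download budget.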

Published

2026-03-14

How to Cite

Pan, Q., Sun, S., Wu, T., Wu, Z., Wang, Y., Liu, M., Gao, B., & Wang, J. (2026). Re-architecting Personalized Federated Learning for Demanding Edge Environments. Proceedings of the AAAI Conference on Artificial Intelligence, 40(29), 24700-24708. https://doi.org/10.1609/aaai.v40i29.39655

Section

AAAI Technical Track on Machine Learning VI