A High-Efficiency Federated Learning Method Using Complementary Pruning for D2D Communication (Student Abstract)

Authors

  • Xiaoqing Xu, Shandong University of Science and Technology
  • Jiaming Pei, The University of Sydney
  • Lukun Wang, Shandong University of Science and Technology

DOI

https://doi.org/10.1609/aaai.v39i28.35318

Abstract

In federated learning, frequent parameter transmission between clients and the server incurs significant communication overhead, much of it due to redundancy within the parameters. To address this issue, we propose FedCPD, a federated learning method that applies complementary pruning for device-to-device (D2D) communication. The approach reduces the number of transmitted parameters by performing complementary pruning on both the server and the clients. In addition, it lowers the communication frequency between clients and the server through chain updates among clients (i.e., D2D communication). Experiments on the MNIST, FMNIST, CIFAR-10, and CIFAR-100 datasets show that our method significantly reduces communication costs while improving model accuracy.
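To make the two ideas in the abstract concrete, the sketch below illustrates (a) pruning a parameter vector to a sparse index/value form so that only the surviving entries need to be transmitted, and (b) a sequential "chain update" in which clients merge models device-to-device before a single server upload. This is a minimal illustration, not the paper's algorithm: the abstract does not specify FedCPD's pruning criterion or its exact complementary scheme, so top-k magnitude pruning and pairwise averaging stand in here as common baselines, and all function names are hypothetical.

```python
# Hypothetical illustration of the two mechanisms named in the abstract:
# sparse (pruned) uploads and chain updates among clients. The actual
# FedCPD pruning rule is not given in the abstract; top-k magnitude
# selection is used here only as a stand-in.

def prune_to_sparse(params, keep_ratio):
    """Keep the largest-magnitude fraction of parameters.

    Returns an {index: value} dict, so only the retained index/value
    pairs (rather than the full dense vector) are transmitted.
    """
    k = max(1, int(len(params) * keep_ratio))
    top = sorted(range(len(params)),
                 key=lambda i: abs(params[i]), reverse=True)[:k]
    return {i: params[i] for i in top}

def apply_sparse(base, sparse_update):
    """Receiver side: overwrite retained positions, keep the rest."""
    out = list(base)
    for i, v in sparse_update.items():
        out[i] = v
    return out

def chain_update(client_models):
    """Merge models sequentially along a client chain (D2D), so only
    the final merged model is uploaded to the server. Pairwise
    averaging is an assumed merge rule for illustration."""
    acc = list(client_models[0])
    for m in client_models[1:]:
        acc = [(a + b) / 2 for a, b in zip(acc, m)]
    return acc
```

For example, pruning `[0.9, -0.05, 0.4, 0.01]` with `keep_ratio=0.5` transmits only two index/value pairs (`{0: 0.9, 2: 0.4}`), halving the upload, while a chain of clients needs just one server round per chain instead of one per client.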

Published

2025-04-11

How to Cite

Xu, X., Pei, J., & Wang, L. (2025). A High-Efficiency Federated Learning Method Using Complementary Pruning for D2D Communication (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 39(28), 29541-29542. https://doi.org/10.1609/aaai.v39i28.35318