Exploring One-Shot Semi-supervised Federated Learning with Pre-trained Diffusion Models

Authors

  • Mingzhao Yang, Fudan University
  • Shangchao Su, Fudan University
  • Bin Li, Fudan University
  • Xiangyang Xue, Fudan University

DOI:

https://doi.org/10.1609/aaai.v38i15.29568

Keywords:

ML: Distributed Machine Learning & Federated Learning, CV: Bias, Fairness & Privacy, CV: Large Vision Models

Abstract

Recently, semi-supervised federated learning (semi-FL) has been proposed to handle the common real-world scenario in which labeled data reside on the server and unlabeled data on the clients. However, existing methods face several challenges, such as communication costs, data heterogeneity, and the training burden on client devices. To address these challenges, we introduce powerful pre-trained diffusion models (DMs) into semi-FL and propose FedDISC, a Federated Diffusion-Inspired Semi-supervised Co-training method. Specifically, we first extract prototypes of the labeled server data and use these prototypes to predict pseudo-labels for the client data. For each category, we compute the cluster centroid and domain-specific representations to capture the semantic and stylistic information of its distribution. After adding noise, these representations are sent back to the server, which uses a pre-trained DM to generate synthetic datasets that comply with the client distributions and trains a global model on them. With the assistance of the vast knowledge within the DM, the synthetic datasets have quality and diversity comparable to the client images, enabling the training of global models whose performance matches or even surpasses the ceiling of supervised centralized training. FedDISC works within a single communication round, requires no local training, and uploads only minimal information, greatly enhancing its practicality. Extensive experiments on three large-scale datasets demonstrate that FedDISC effectively addresses the semi-FL problem on non-IID clients and outperforms the compared state-of-the-art methods. Visualization experiments further show that the synthetic datasets generated by FedDISC exhibit diversity and quality comparable to the original client datasets, with a negligible risk of leaking privacy-sensitive client information.
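
As a rough illustration of this pipeline, the sketch below walks through the client-side steps described in the abstract: prototype-based pseudo-labeling against server prototypes, per-class cluster centroids, and Gaussian noise added before upload. This is a minimal sketch, not the authors' implementation; the feature dimensions, `noise_scale`, and all function names are illustrative assumptions, and a real system would obtain features from a pre-trained encoder rather than random vectors.

```python
import numpy as np

# --- Server side: class prototypes from the labeled data -------------------
def server_prototypes(features, labels, num_classes):
    """Mean feature vector per class, computed on the labeled server data."""
    return np.stack([features[labels == c].mean(axis=0) for c in range(num_classes)])

# --- Client side: pseudo-labeling against the server prototypes ------------
def pseudo_label(client_feats, prototypes):
    """Assign each client feature to its most similar prototype (cosine)."""
    f = client_feats / np.linalg.norm(client_feats, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return (f @ p.T).argmax(axis=1)

# --- Client side: per-class representations, noised before upload ----------
def class_representations(client_feats, pseudo, num_classes, noise_scale=0.1, seed=0):
    """Per-class cluster centroid, with Gaussian noise added for privacy."""
    rng = np.random.default_rng(seed)
    reps = {}
    for c in range(num_classes):
        feats_c = client_feats[pseudo == c]
        if len(feats_c) == 0:
            continue  # this client holds no samples pseudo-labeled as class c
        centroid = feats_c.mean(axis=0)  # semantic information of the class
        reps[c] = centroid + rng.normal(scale=noise_scale, size=centroid.shape)
    return reps

# Toy usage: random features stand in for a pre-trained image encoder.
if __name__ == "__main__":
    rng = np.random.default_rng(42)
    server_feats = rng.normal(size=(100, 64))
    server_labels = rng.integers(0, 5, size=100)
    protos = server_prototypes(server_feats, server_labels, num_classes=5)

    client_feats = rng.normal(size=(40, 64))
    labels_hat = pseudo_label(client_feats, protos)
    uploads = class_representations(client_feats, labels_hat, num_classes=5)
    print({c: v.shape for c, v in uploads.items()})
    # On the server, these noised representations would condition a
    # pre-trained diffusion model to synthesize training images.
```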

Published

2024-03-24

How to Cite

Yang, M., Su, S., Li, B., & Xue, X. (2024). Exploring One-Shot Semi-supervised Federated Learning with Pre-trained Diffusion Models. Proceedings of the AAAI Conference on Artificial Intelligence, 38(15), 16325-16333. https://doi.org/10.1609/aaai.v38i15.29568

Section

AAAI Technical Track on Machine Learning VI