Preserving Privacy in Federated Learning with Ensemble Cross-Domain Knowledge Distillation

Authors

  • Xuan Gong University at Buffalo
  • Abhishek Sharma United Imaging Intelligence
  • Srikrishna Karanam United Imaging Intelligence
  • Ziyan Wu United Imaging Intelligence
  • Terrence Chen United Imaging Intelligence
  • David Doermann University at Buffalo
  • Arun Innanje United Imaging Intelligence

DOI:

https://doi.org/10.1609/aaai.v36i11.21446

Keywords:

AI For Social Impact (AISI Track Papers Only), Computer Vision (CV), Humans And AI (HAI), Philosophy And Ethics Of AI (PEAI)

Abstract

Federated Learning (FL) is a machine learning paradigm in which local nodes collaboratively train a central model while the training data remains decentralized. Existing FL methods typically share model parameters or employ co-distillation to address the issue of unbalanced data distribution. However, they suffer from communication bottlenecks and, more importantly, risk privacy leakage. In this work, we develop a privacy-preserving and communication-efficient method in an FL framework with one-shot offline knowledge distillation using unlabeled, cross-domain, non-sensitive public data. We propose a quantized and noisy ensemble of local predictions from completely trained local models for stronger privacy guarantees without sacrificing accuracy. Extensive experiments on image classification and text classification tasks show that our method outperforms baseline FL algorithms in both accuracy and data privacy preservation.
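The quantized and noisy ensembling described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function names, quantization levels, and noise scale are all illustrative assumptions, and random softmax outputs stand in for real local-model predictions on the public data.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_predictions(num_clients=5, num_samples=8, num_classes=3):
    # Stand-in for per-client softmax outputs on the shared unlabeled
    # public data (in the paper these come from fully trained local models).
    logits = rng.normal(size=(num_clients, num_samples, num_classes))
    exp = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return exp / exp.sum(axis=-1, keepdims=True)

def quantize(probs, levels=16):
    # Quantize probabilities to a fixed number of levels, reducing both
    # communication cost and the information each client reveals.
    return np.round(probs * (levels - 1)) / (levels - 1)

def noisy_ensemble(probs, noise_scale=0.05, levels=16):
    # Quantize each client's predictions, perturb them with noise,
    # then average across clients and renormalize into soft labels.
    q = quantize(probs, levels)
    noisy = q + rng.normal(scale=noise_scale, size=q.shape)
    ensemble = np.clip(noisy.mean(axis=0), 0.0, None)
    return ensemble / ensemble.sum(axis=-1, keepdims=True)

preds = local_predictions()
soft_labels = noisy_ensemble(preds)
# soft_labels would then serve as one-shot distillation targets
# for training the central model on the public data.
```

The key property this sketch captures is that only quantized, noise-perturbed prediction ensembles leave the clients, in a single offline round, rather than model parameters or repeated gradient updates.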

Published

2022-06-28

How to Cite

Gong, X., Sharma, A., Karanam, S., Wu, Z., Chen, T., Doermann, D., & Innanje, A. (2022). Preserving Privacy in Federated Learning with Ensemble Cross-Domain Knowledge Distillation. Proceedings of the AAAI Conference on Artificial Intelligence, 36(11), 11891-11899. https://doi.org/10.1609/aaai.v36i11.21446