Federated Learning via Input-Output Collaborative Distillation

Authors

  • Xuan Gong, University at Buffalo, Buffalo, NY, USA; Harvard Medical School, Boston, MA, USA
  • Shanglin Li, Institute of Artificial Intelligence, Hangzhou Research Institute, Beihang University, Beijing, China
  • Yuxiang Bao, Institute of Artificial Intelligence, Hangzhou Research Institute, Beihang University, Beijing, China
  • Barry Yao, University at Buffalo, Buffalo, NY, USA; Virginia Tech, Blacksburg, VA, USA
  • Yawen Huang, Jarvis Research Center, Tencent YouTu Lab, Shenzhen, China
  • Ziyan Wu, United Imaging Intelligence, Burlington, MA, USA
  • Baochang Zhang, Institute of Artificial Intelligence, Hangzhou Research Institute, Beihang University, Beijing, China; Zhongguancun Laboratory, Beijing, China; Nanchang Institute of Technology, Nanchang, China
  • Yefeng Zheng, Jarvis Research Center, Tencent YouTu Lab, Shenzhen, China
  • David Doermann, University at Buffalo, Buffalo, NY, USA

DOI:

https://doi.org/10.1609/aaai.v38i20.30209

Keywords:

General

Abstract

Federated learning (FL) is a machine learning paradigm in which distributed local nodes collaboratively train a central model without sharing individually held private data. Existing FL methods either iteratively share local model parameters or deploy co-distillation. However, the former is highly susceptible to private data leakage, and the latter relies on the availability of task-relevant real data. Instead, we propose a data-free FL framework based on local-to-central collaborative distillation that directly exploits the input and output spaces. Our design eliminates any requirement for recursive local parameter exchange or auxiliary task-relevant data to transfer knowledge, thereby giving local users direct control over privacy. In particular, to cope with the inherent data heterogeneity across local nodes, our technique learns to distill inputs on which each local model produces consensual yet unique results, so that each node's expertise is represented. Extensive experiments on image classification and segmentation tasks, under various real-world heterogeneous federated settings on both natural and medical images, show that the proposed framework achieves notable privacy-utility trade-offs. Code is available at https://github.com/lsl001006/FedIOD.
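To make the abstract's idea concrete, the sketch below illustrates data-free input-output co-distillation in PyTorch: a generator synthesizes inputs, frozen local models act as teachers, and a central student is trained to match their ensemble output, so no local parameters or real data leave the clients. All names (`distill_round`, the consensus and diversity terms, the hyperparameters) are illustrative assumptions rather than the authors' objectives or implementation; refer to the linked repository for the actual code.

```python
# Minimal, hypothetical PyTorch sketch of data-free input-output
# co-distillation. Module and loss names are illustrative assumptions,
# not the authors' implementation (see the FedIOD repository for that).
import torch
import torch.nn.functional as F


def distill_round(generator, local_models, central_model,
                  g_opt, s_opt, steps=100, batch_size=64, z_dim=100,
                  device="cpu"):
    """One server-side distillation round: only the outputs of frozen
    local models on synthesized inputs are used; no local parameters or
    real data are exchanged."""
    for m in local_models:
        m.eval()  # local models act as fixed teachers

    for _ in range(steps):
        # --- Generator step: synthesize inputs on which the teachers are
        #     collectively confident (consensual) yet individually
        #     distinctive. The exact balance here is an assumption. ---
        z = torch.randn(batch_size, z_dim, device=device)
        x_g = generator(z)
        probs_g = [F.softmax(m(x_g), dim=1) for m in local_models]
        mean_g = torch.stack(probs_g).mean(0)
        # low entropy of the ensemble mean -> confident, agreed-upon samples
        consensus_loss = -(mean_g * mean_g.clamp_min(1e-8).log()).sum(1).mean()
        # KL of each teacher from the mean -> per-teacher distinctiveness
        diversity = torch.stack([
            F.kl_div(p.clamp_min(1e-8).log(), mean_g, reduction="batchmean")
            for p in probs_g
        ]).mean()
        g_loss = consensus_loss - diversity
        g_opt.zero_grad()
        g_loss.backward()
        g_opt.step()

        # --- Student step: match the teachers' ensemble output on the
        #     distilled inputs (output-space knowledge transfer). ---
        with torch.no_grad():
            x = generator(torch.randn(batch_size, z_dim, device=device))
            consensus = torch.stack(
                [F.softmax(m(x), dim=1) for m in local_models]).mean(0)
        s_logits = central_model(x)
        s_loss = F.kl_div(F.log_softmax(s_logits, dim=1), consensus,
                          reduction="batchmean")
        s_opt.zero_grad()
        s_loss.backward()
        s_opt.step()
```

The point mirrored in this sketch is that knowledge flows only through synthesized inputs and the teachers' output distributions, which is what lets the framework avoid both parameter exchange and auxiliary real data.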

Published

2024-03-24

How to Cite

Gong, X., Li, S., Bao, Y., Yao, B., Huang, Y., Wu, Z., Zhang, B., Zheng, Y., & Doermann, D. (2024). Federated Learning via Input-Output Collaborative Distillation. Proceedings of the AAAI Conference on Artificial Intelligence, 38(20), 22058-22066. https://doi.org/10.1609/aaai.v38i20.30209