DSFedMed: Dual-Scale Federated Medical Image Segmentation via Mutual Distillation Between Foundation and Lightweight Models

Authors

  • Hanwen Zhang, Peking University
  • Qiaojin Shen, Peking University
  • Yuxi Liu, Peking University
  • Yuesheng Zhu, Peking University
  • Guibo Luo, Peking University

DOI

https://doi.org/10.1609/aaai.v40i15.38239

Abstract

Foundation Models (FMs) have demonstrated strong generalization across diverse vision tasks. However, their deployment in federated settings is hindered by high computational demands, substantial communication overhead, and significant inference costs. We propose DSFedMed, a dual-scale federated framework that enables mutual knowledge distillation between a centralized foundation model and lightweight client models for medical image segmentation. To support knowledge distillation, a set of high-quality synthetic medical images is generated to replace real public datasets, and a learnability-guided sample selection strategy is proposed to enhance the efficiency and effectiveness of dual-scale distillation. This mutual distillation enables the foundation model to transfer general knowledge to lightweight clients, while client-specific insights flow back to refine the foundation model. Evaluations on five medical image segmentation datasets show that DSFedMed achieves an average 2 percent improvement in Dice score while reducing communication costs and inference time by nearly 90 percent compared to existing federated foundation model baselines. These results demonstrate significant efficiency gains and scalability for resource-limited federated deployments.
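The core mechanism described above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration, not the authors' implementation: it assumes per-pixel foreground probabilities from both models on a shared synthetic batch, uses a symmetric KL-based distillation loss in each direction, and scores "learnability" by model disagreement (one plausible reading of the paper's selection strategy; the actual criterion is not specified here).

```python
import numpy as np

def kl_divergence(p, q, eps=1e-8):
    """Mean per-pixel binary KL(p || q) between two probability maps."""
    p = np.clip(p, eps, 1.0 - eps)
    q = np.clip(q, eps, 1.0 - eps)
    return float(np.mean(p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))))

def mutual_distillation_losses(fm_probs, client_probs):
    """Bidirectional distillation on a shared synthetic sample:
    the client mimics the foundation model (FM), and the FM in turn
    absorbs client-specific knowledge."""
    loss_client = kl_divergence(fm_probs, client_probs)  # FM teaches client
    loss_fm = kl_divergence(client_probs, fm_probs)      # client refines FM
    return loss_client, loss_fm

def select_learnable(samples, fm_probs, client_probs, k):
    """Hypothetical learnability-guided selection: keep the k synthetic
    samples where the two models disagree most, i.e. where mutual
    distillation has the most to transfer."""
    scores = [kl_divergence(f, c) for f, c in zip(fm_probs, client_probs)]
    order = np.argsort(scores)[::-1]  # highest disagreement first
    return [samples[i] for i in order[:k]]
```

In a real federated round, only the distilled signals on the synthetic batch (not model weights of the foundation model) would cross the network, which is where the claimed communication savings would come from.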

Published

2026-03-14

How to Cite

Zhang, H., Shen, Q., Liu, Y., Zhu, Y., & Luo, G. (2026). DSFedMed: Dual-Scale Federated Medical Image Segmentation via Mutual Distillation Between Foundation and Lightweight Models. Proceedings of the AAAI Conference on Artificial Intelligence, 40(15), 12457–12465. https://doi.org/10.1609/aaai.v40i15.38239

Section

AAAI Technical Track on Computer Vision XII