Federated Modality-Specific Encoders and Multimodal Anchors for Personalized Brain Tumor Segmentation

Authors

  • Qian Dai School of Informatics, Xiamen University, Xiamen, China
  • Dong Wei Jarvis Research Center, Tencent Youtu Lab / Tencent Healthcare (Shenzhen) Co., Ltd., Shenzhen, China
  • Hong Liu Jarvis Research Center, Tencent Youtu Lab / Tencent Healthcare (Shenzhen) Co., Ltd., Shenzhen, China; School of Medicine, Xiamen University, Xiamen, China
  • Jinghan Sun Jarvis Research Center, Tencent Youtu Lab / Tencent Healthcare (Shenzhen) Co., Ltd., Shenzhen, China; School of Medicine, Xiamen University, Xiamen, China
  • Liansheng Wang School of Informatics, Xiamen University, Xiamen, China
  • Yefeng Zheng Jarvis Research Center, Tencent Youtu Lab / Tencent Healthcare (Shenzhen) Co., Ltd., Shenzhen, China

DOI:

https://doi.org/10.1609/aaai.v38i2.27909

Keywords:

CV: Medical and Biological Imaging, CV: Multi-modal Vision, CV: Segmentation, ML: Distributed Machine Learning & Federated Learning

Abstract

Most existing federated learning (FL) methods for medical image analysis consider only intra-modal heterogeneity, limiting their applicability to multimodal imaging applications. In practice, it is not uncommon for some FL participants to possess only a subset of the complete imaging modalities, which poses inter-modal heterogeneity as a challenge to effectively training a global model on all participants’ data. In addition, in such a scenario each participant would expect to obtain from the FL a personalized model tailored to its local data characteristics. In this work, we propose a new FL framework with federated modality-specific encoders and multimodal anchors (FedMEMA) to address these two concurrent issues simultaneously. First, FedMEMA employs an exclusive encoder for each modality to account for the inter-modal heterogeneity. While the encoders are shared by the participants, the decoders are personalized to meet individual needs. Specifically, a server with full-modal data employs a fusion decoder to aggregate and fuse the representations from all modality-specific encoders, thereby bridging the modalities and optimizing the encoders through backpropagation. Meanwhile, multiple anchors are extracted from the fused multimodal representations and distributed to the clients along with the encoder parameters. On the client side, clients with incomplete modalities calibrate their missing-modal representations toward the global full-modal anchors via scaled dot-product cross-attention, compensating for the information lost to absent modalities while adapting the representations of the present ones. FedMEMA is validated on the BraTS 2020 benchmark for multimodal brain tumor segmentation. Results show that it outperforms various state-of-the-art methods for multimodal and personalized FL and that its novel designs are effective. Our code is available.
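To make the anchor-based calibration step described above more concrete, the sketch below shows one plausible PyTorch implementation of calibrating local (missing-modal) feature tokens toward global full-modal anchors with scaled dot-product cross-attention. This is not the authors' implementation: the module name AnchorCalibration, the assignment of queries to the local features and keys/values to the anchors, the linear projections, the residual connection, and all tensor shapes are illustrative assumptions.

```python
# Minimal sketch of anchor-based calibration via scaled dot-product cross-attention.
# Assumptions (not taken from the paper): queries come from the client's local
# feature tokens; keys/values come from the global full-modal anchors; shapes,
# projections, and the residual connection are illustrative choices.
import torch
import torch.nn as nn


class AnchorCalibration(nn.Module):
    """Calibrate local (missing-modal) features toward global multimodal anchors."""

    def __init__(self, dim: int):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)  # project local features to queries
        self.k_proj = nn.Linear(dim, dim)  # project anchors to keys
        self.v_proj = nn.Linear(dim, dim)  # project anchors to values
        self.scale = dim ** -0.5           # scaling factor of dot-product attention

    def forward(self, local_feats: torch.Tensor, anchors: torch.Tensor) -> torch.Tensor:
        # local_feats: (B, N, dim) flattened feature tokens from the client encoder(s)
        # anchors:     (K, dim)    global full-modal anchors broadcast by the server
        q = self.q_proj(local_feats)                                         # (B, N, dim)
        k = self.k_proj(anchors).unsqueeze(0)                                # (1, K, dim)
        v = self.v_proj(anchors).unsqueeze(0)                                # (1, K, dim)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)   # (B, N, K)
        calibrated = attn @ v                                                # (B, N, dim)
        # Residual connection preserves the information of the present modalities
        return local_feats + calibrated


if __name__ == "__main__":
    # Hypothetical usage: 64 local feature tokens per case, 8 global anchors.
    calib = AnchorCalibration(dim=256)
    feats = torch.randn(2, 64, 256)   # local feature tokens (batch of 2)
    anchors = torch.randn(8, 256)     # global full-modal anchors from the server
    out = calib(feats, anchors)
    print(out.shape)                  # torch.Size([2, 64, 256])
```

The design intent, per the abstract, is that attending over server-side anchors injects full-modal context into representations computed from whichever modalities the client actually has, while the residual path keeps the present-modality information intact.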

Published

2024-03-24

How to Cite

Dai, Q., Wei, D., Liu, H., Sun, J., Wang, L., & Zheng, Y. (2024). Federated Modality-Specific Encoders and Multimodal Anchors for Personalized Brain Tumor Segmentation. Proceedings of the AAAI Conference on Artificial Intelligence, 38(2), 1445-1453. https://doi.org/10.1609/aaai.v38i2.27909

Section

AAAI Technical Track on Computer Vision I