Multi-Source Collaborative Gradient Discrepancy Minimization for Federated Domain Generalization

Authors

  • Yikang Wei, Tianjin University
  • Yahong Han, Tianjin University

DOI:

https://doi.org/10.1609/aaai.v38i14.29510

Keywords:

ML: Transfer, Domain Adaptation, Multi-Task Learning, CV: Representation Learning for Vision, ML: Classification and Regression

Abstract

Federated Domain Generalization aims to learn a domain-invariant model from multiple decentralized source domains for deployment on an unseen target domain. Due to privacy concerns, the data from different source domains are kept isolated, which poses challenges in bridging the domain gap. To address this issue, we propose a Multi-source Collaborative Gradient Discrepancy Minimization (MCGDM) method for federated domain generalization. Specifically, we propose intra-domain gradient matching between the original images and augmented images to avoid overfitting to the domain-specific information within isolated domains. In addition, we propose inter-domain gradient matching with the collaboration of other domains, which can further reduce the domain shift across decentralized domains. Combining intra-domain and inter-domain gradient matching, our method enables the learned model to generalize well on unseen domains. Furthermore, our method can be extended to the federated domain adaptation task by fine-tuning the target model on the pseudo-labeled target domain. Extensive experiments on federated domain generalization and adaptation indicate that our method significantly outperforms state-of-the-art methods.
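
For illustration, below is a minimal PyTorch-style sketch of the intra-domain gradient matching idea described in the abstract. The function names, the cosine-based discrepancy, the choice to match gradients on the classification head, and the weight `lambda_intra` are assumptions made for exposition, not the authors' released implementation.

```python
# Hypothetical sketch of intra-domain gradient matching between original and
# augmented views (assumed formulation; see the paper for the exact losses).
import torch
import torch.nn.functional as F


def gradient_discrepancy(loss_a, loss_b, params):
    """1 - cosine similarity between the gradients of two losses w.r.t. params."""
    grads_a = torch.autograd.grad(loss_a, params, create_graph=True)
    grads_b = torch.autograd.grad(loss_b, params, create_graph=True)
    ga = torch.cat([g.flatten() for g in grads_a])
    gb = torch.cat([g.flatten() for g in grads_b])
    return 1.0 - F.cosine_similarity(ga, gb, dim=0)


def intra_domain_step(model, classifier, x, x_aug, y, optimizer, lambda_intra=1.0):
    """One local update: classification losses on both views plus a penalty on
    the gradient discrepancy between them, computed on the classifier head."""
    logits = classifier(model(x))
    logits_aug = classifier(model(x_aug))
    loss_orig = F.cross_entropy(logits, y)
    loss_aug = F.cross_entropy(logits_aug, y)

    head_params = [p for p in classifier.parameters() if p.requires_grad]
    gd = gradient_discrepancy(loss_orig, loss_aug, head_params)

    loss = loss_orig + loss_aug + lambda_intra * gd
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under the same assumptions, the inter-domain term could reuse `gradient_discrepancy` on gradients obtained with a classification head shared by another source domain, which is one way to read the abstract's "collaboration of other domains"; the paper should be consulted for the exact formulation.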

Published

2024-03-24

How to Cite

Wei, Y., & Han, Y. (2024). Multi-Source Collaborative Gradient Discrepancy Minimization for Federated Domain Generalization. Proceedings of the AAAI Conference on Artificial Intelligence, 38(14), 15805–15813. https://doi.org/10.1609/aaai.v38i14.29510

Issue

Vol. 38 No. 14 (2024)

Section

AAAI Technical Track on Machine Learning V