Federated Unsupervised Domain Generalization Using Global and Local Alignment of Gradients

Authors

  • Farhad Pourpanah, Queen's University
  • Mahdiyar Molahasani, Queen's University
  • Milad Soltany, Queen's University
  • Michael Greenspan, Queen's University
  • Ali Etemad, Queen's University

DOI:

https://doi.org/10.1609/aaai.v39i19.34197

Abstract

We address the problem of federated domain generalization in an unsupervised setting for the first time. We first theoretically establish a connection between domain shift and gradient alignment in unsupervised federated learning, and show that aligning the gradients at both the client and server levels can facilitate the generalization of the model to new (target) domains. Building on this insight, we propose a novel method named FedGaLA, which performs gradient alignment at the client level to encourage clients to learn domain-invariant features, as well as global gradient alignment at the server to obtain a more generalized aggregated model. To empirically evaluate our method, we perform various experiments on four commonly used multi-domain datasets: PACS, OfficeHome, DomainNet, and TerraInc. The results demonstrate the effectiveness of our method, which outperforms comparable baselines. Ablation and sensitivity studies further show the impact of different components and parameters in our approach.
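To illustrate the server-level idea described in the abstract, the sketch below shows one common form of gradient alignment during aggregation: client updates whose direction disagrees with the consensus are filtered out before averaging. This is a hypothetical minimal example using NumPy, not the paper's actual FedGaLA algorithm; the function name, the naive-mean reference direction, and the threshold parameter are all illustrative assumptions.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two flattened gradient vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def aligned_aggregate(client_updates, threshold=0.0):
    """Illustrative server-side alignment sketch (not the paper's method):
    average only the client updates whose cosine similarity with the
    naive mean update exceeds `threshold`."""
    updates = np.stack(client_updates)
    reference = updates.mean(axis=0)  # FedAvg-style consensus direction
    kept = [u for u in updates if cosine_similarity(u, reference) > threshold]
    if not kept:  # fall back to the plain average if nothing is aligned
        return reference
    return np.mean(kept, axis=0)

# Two clients push in roughly the same direction; a third opposes them
# (e.g., due to domain shift) and is excluded from the aggregate.
updates = [np.array([1.0, 0.0]), np.array([0.9, 0.1]), np.array([-1.0, 0.0])]
agg = aligned_aggregate(updates)  # → array([0.95, 0.05])
```

The same filtering idea can be applied at the client level by comparing each local mini-batch gradient against a local reference direction, which is one way to read the abstract's client/server decomposition.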

Published

2025-04-11

How to Cite

Pourpanah, F., Molahasani, M., Soltany, M., Greenspan, M., & Etemad, A. (2025). Federated Unsupervised Domain Generalization Using Global and Local Alignment of Gradients. Proceedings of the AAAI Conference on Artificial Intelligence, 39(19), 19948–19958. https://doi.org/10.1609/aaai.v39i19.34197

Section

AAAI Technical Track on Machine Learning V