Fair Domain Generalization: An Information-Theoretic View

Authors

  • Tangzheng Lian, King's College London
  • Guanyu Hu, Xi'an Jiaotong University; Queen Mary University of London
  • Dimitrios Kollias, Queen Mary University of London
  • Xinyu Yang, Xi'an Jiaotong University
  • Oya Celiktutan, King's College London

DOI:

https://doi.org/10.1609/aaai.v40i28.39508

Abstract

Domain generalization (DG) and algorithmic fairness are two key challenges in machine learning. However, most DG methods focus solely on minimizing expected risk in the unseen target domain, without considering algorithmic fairness. Conversely, fairness methods typically do not account for domain shifts, so the fairness achieved during training may not generalize to unseen test domains. In this work, we bridge these gaps by studying the problem of Fair Domain Generalization (FairDG), which aims to minimize both expected risk and fairness violations in unseen target domains. We derive novel mutual information-based upper bounds for expected risk and fairness violations in multi-class classification tasks with multi-group sensitive attributes. These bounds provide key insights for algorithm design from an information-theoretic perspective. Guided by these insights, we propose a practical method that solves the FairDG problem through Pareto optimization. Experiments on real-world vision and language datasets show that our method achieves superior utility–fairness trade-offs compared to existing approaches.
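To make the utility–fairness trade-off concrete, the following is a minimal illustrative sketch, not the paper's algorithm: it uses weighted-sum scalarization, one classical way to approximate a Pareto front, to trade classification error against a demographic-parity gap on synthetic data. All names, the synthetic data, and the threshold classifier are hypothetical choices for illustration only.

```python
# Hypothetical example: approximate a utility-fairness Pareto front by
# weighted-sum scalarization. This is NOT the paper's method; it only
# illustrates what "Pareto optimization over risk and fairness" means.
import random

random.seed(0)

# Synthetic data: a score s, a binary sensitive group g, and a label y
# that is mildly correlated with the group (so fairness is non-trivial).
pairs = [(random.random(), random.randint(0, 1)) for _ in range(400)]
samples = [(s, g, int(s + 0.2 * g > 0.5)) for s, g in pairs]

def evaluate(threshold):
    """Return (error rate, demographic-parity gap) for a score threshold."""
    preds = [(s >= threshold, g, y) for s, g, y in samples]
    err = sum(p != y for p, _, y in preds) / len(preds)

    def pos_rate(group):
        members = [p for p, g, _ in preds if g == group]
        return sum(members) / max(1, len(members))

    # Demographic-parity gap: |P(pred=1 | g=0) - P(pred=1 | g=1)|.
    gap = abs(pos_rate(0) - pos_rate(1))
    return err, gap

# Scalarize: for each trade-off weight lam, minimize err + lam * gap
# over a grid of thresholds; sweeping lam traces trade-off points.
front = []
for lam in [0.0, 0.5, 1.0, 2.0, 4.0]:
    best_t = min(
        (t / 100 for t in range(101)),
        key=lambda t: (lambda e, g: e + lam * g)(*evaluate(t)),
    )
    err, gap = evaluate(best_t)
    front.append((lam, err, gap))

for lam, err, gap in front:
    print(f"lam={lam:.1f}  error={err:.3f}  dp_gap={gap:.3f}")
```

A standard property of weighted-sum scalarization is that as the fairness weight `lam` grows, the fairness violation at the selected solution is non-increasing, which is what the sweep above exhibits.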

Published

2026-03-14

How to Cite

Lian, T., Hu, G., Kollias, D., Yang, X., & Celiktutan, O. (2026). Fair Domain Generalization: An Information-Theoretic View. Proceedings of the AAAI Conference on Artificial Intelligence, 40(28), 23382–23390. https://doi.org/10.1609/aaai.v40i28.39508

Section

AAAI Technical Track on Machine Learning V