Transportable Representations for Domain Generalization
DOI: https://doi.org/10.1609/aaai.v38i11.29175
Keywords: ML: Causal Learning, ML: Transfer, Domain Adaptation, Multi-Task Learning, RU: Causality
Abstract
One key assumption in the machine learning literature is that the testing and training data come from the same distribution, which is often violated in practice. The anchors that allow generalization to take place are causal, arising from the stability and modularity of the mechanisms underlying the system of variables. Building on the theory of causal transportability, we define the notion of "transportable representations" and show that these representations are suitable candidates for the domain generalization task. Specifically, given graphical assumptions about the underlying system, transportable representations can be characterized accordingly, and the distribution of the label conditioned on the representation can be computed in terms of the source distributions. Finally, we relax the assumption of access to the underlying graph by proving a graphical-invariance duality theorem, which delineates certain probabilistic invariances present in the source data as a sound and complete criterion for generalizable classification. Our findings provide a unifying theoretical basis for several existing approaches to the domain generalization problem.
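To make the abstract's central idea concrete, the following is a minimal illustrative sketch (not the paper's method): in a toy two-domain setup, a representation is a plausible candidate for transportability when the conditional distribution of the label given that representation is invariant across source domains, while a domain-varying feature fails this probabilistic-invariance check. All variable names, the data-generating process, and the thresholds are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_domain(n, noise_shift):
    # Z: feature whose mechanism for Y is stable across domains
    # W: feature whose mechanism varies with the domain (noise_shift)
    z = rng.integers(0, 2, size=n)
    y = (z ^ (rng.random(n) < 0.1)).astype(int)           # stable: P(Y|Z) shared
    w = (y ^ (rng.random(n) < noise_shift)).astype(int)   # unstable: shifts per domain
    return z, w, y

def cond_dist(feat, y):
    # Empirical P(Y=1 | feat=v) for binary feat values v in {0, 1}.
    return np.array([y[feat == v].mean() for v in (0, 1)])

# Two source domains that differ only in the W mechanism.
z1, w1, y1 = sample_domain(50_000, noise_shift=0.05)
z2, w2, y2 = sample_domain(50_000, noise_shift=0.40)

# Probabilistic-invariance check: compare P(Y | representation) across domains.
gap_z = np.abs(cond_dist(z1, y1) - cond_dist(z2, y2)).max()  # small: Z is invariant
gap_w = np.abs(cond_dist(w1, y1) - cond_dist(w2, y2)).max()  # large: W is not

print(f"gap for Z: {gap_z:.3f}, gap for W: {gap_w:.3f}")
```

In this toy setup, only Z would pass the invariance criterion, so a classifier built on P(Y | Z) estimated from the pooled source data is the one that stands a chance of generalizing to an unseen target domain; a representation based on W would not.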
Published
2024-03-24
How to Cite
Jalaldoust, K., & Bareinboim, E. (2024). Transportable Representations for Domain Generalization. Proceedings of the AAAI Conference on Artificial Intelligence, 38(11), 12790-12800. https://doi.org/10.1609/aaai.v38i11.29175
Section
AAAI Technical Track on Machine Learning II