Domain Generalization via Conditional Invariant Representations

Authors

  • Ya Li, University of Science and Technology of China
  • Mingming Gong, Carnegie Mellon University; University of Pittsburgh
  • Xinmei Tian, University of Science and Technology of China
  • Tongliang Liu, University of Sydney
  • Dacheng Tao, University of Sydney

DOI

https://doi.org/10.1609/aaai.v32i1.11682

Abstract

Domain generalization aims to transfer knowledge gained from multiple labeled source domains to unseen target domains. The main difficulty arises from dataset bias: training and test data follow different distributions, and the training set itself mixes heterogeneous samples from multiple source distributions. Let X denote the features and Y the class labels. Existing domain generalization methods address dataset bias by learning a domain-invariant representation h(X) whose marginal distribution P(h(X)) is the same across the source domains. The functional relationship encoded in P(Y|X) is usually assumed to be stable across domains, so that P(Y|h(X)) is also invariant; however, it is unclear whether this assumption holds in practical problems. In this paper, we consider the general situation in which both P(X) and P(Y|X) can change across domains. We propose to learn a feature representation whose class-conditional distributions P(h(X)|Y) are domain invariant. Because P(h(X),Y) = P(h(X)|Y)P(Y), such a conditionally invariant representation guarantees the invariance of the joint distribution P(h(X),Y) whenever the class prior P(Y) does not change across training and test domains. Extensive experiments on both synthetic and real data demonstrate the effectiveness of the proposed method.
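To make the conditional-invariance idea concrete, the sketch below penalizes discrepancies between the class-conditional feature distributions P(h(X)|Y) of two source domains using a squared maximum mean discrepancy (MMD) estimate under an RBF kernel. This is a minimal illustration, not the paper's actual objective; the function names (rbf_mmd2, conditional_invariance_penalty), the toy linear featurizer, and the synthetic data are all illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation): penalize differences between
# the class-conditional feature distributions P(h(X)|Y) of two source domains
# with a biased squared-MMD estimate under an RBF kernel.
import numpy as np

def rbf_mmd2(a, b, gamma=1.0):
    """Biased estimate of squared MMD between sample sets a and b (rows = points)."""
    def gram(x, y):
        # Pairwise squared Euclidean distances, mapped through the RBF kernel.
        d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * d2)
    return gram(a, a).mean() + gram(b, b).mean() - 2.0 * gram(a, b).mean()

def conditional_invariance_penalty(h, X1, y1, X2, y2):
    """Sum over shared classes c of MMD^2 between h(X1 | y1 = c) and h(X2 | y2 = c)."""
    penalty = 0.0
    for c in np.intersect1d(y1, y2):
        penalty += rbf_mmd2(h(X1[y1 == c]), h(X2[y2 == c]))
    return penalty

# Toy usage: a random linear featurizer h and two synthetic source domains.
rng = np.random.default_rng(0)
W = rng.normal(size=(5, 2))          # map 5-d inputs to a 2-d representation
h = lambda X: X @ W
X1, y1 = rng.normal(size=(100, 5)), rng.integers(0, 2, size=100)
X2, y2 = rng.normal(loc=0.5, size=(100, 5)), rng.integers(0, 2, size=100)
print(conditional_invariance_penalty(h, X1, y1, X2, y2))
```

In a full method, this penalty would be minimized jointly with a classification loss while learning h; kernel choice, weighting across classes, and the extension to more than two source domains are design decisions left out of this sketch.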

Published

2018-04-29

How to Cite

Li, Y., Gong, M., Tian, X., Liu, T., & Tao, D. (2018). Domain Generalization via Conditional Invariant Representations. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11682