Wasserstein Distance Guided Representation Learning for Domain Adaptation


  • Jian Shen, Shanghai Jiao Tong University
  • Yanru Qu, Shanghai Jiao Tong University
  • Weinan Zhang, Shanghai Jiao Tong University
  • Yong Yu, Shanghai Jiao Tong University




domain adaptation, Wasserstein distance, representation learning


Domain adaptation aims to generalize a high-performance learner to a target domain by utilizing knowledge distilled from a source domain that has a different but related data distribution. One solution to domain adaptation is to learn domain-invariant feature representations that remain discriminative for prediction. To learn such representations, domain adaptation frameworks usually combine a domain-invariant representation learning approach, which measures and reduces the domain discrepancy, with a discriminator for classification. Inspired by Wasserstein GAN, in this paper we propose a novel approach to learning domain-invariant feature representations, namely Wasserstein Distance Guided Representation Learning (WDGRL). WDGRL utilizes a neural network, termed the domain critic, to estimate the empirical Wasserstein distance between source and target samples, and optimizes the feature extractor network to minimize this estimated distance in an adversarial manner. The theoretical advantages of the Wasserstein distance for domain adaptation lie in its gradient property and its promising generalization bound. Empirical studies on common sentiment and image classification adaptation datasets demonstrate that our proposed WDGRL outperforms state-of-the-art domain-invariant representation learning approaches.
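The critic-based estimation step in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation (which trains deep critic and feature-extractor networks jointly): it uses a hypothetical one-dimensional linear critic f_w(x) = w·x, and weight clipping as a simplified stand-in for a Lipschitz constraint, to show how maximizing the dual objective E[f(x_s)] − E[f(x_t)] over a 1-Lipschitz critic yields an empirical Wasserstein distance estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
xs = rng.normal(0.0, 1.0, 1000)  # source samples (mean 0)
xt = rng.normal(2.0, 1.0, 1000)  # target samples (mean 2)

# Linear critic f_w(x) = w * x; clipping |w| <= 1 keeps f 1-Lipschitz.
w, lr = 0.1, 0.05
for _ in range(200):
    # Gradient of the dual objective E[f(xs)] - E[f(xt)] with respect to w
    grad = xs.mean() - xt.mean()
    w = np.clip(w + lr * grad, -1.0, 1.0)  # gradient ascent + clipping

# With the optimal 1-Lipschitz linear critic, the estimate equals the
# absolute difference of the empirical means.
w1_estimate = w * (xs.mean() - xt.mean())
print(w1_estimate)
```

In WDGRL itself, the feature extractor would then be updated to *decrease* this estimate, while the critic repeatedly re-maximizes it, giving the adversarial minimax training described above.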




How to Cite

Shen, J., Qu, Y., Zhang, W., & Yu, Y. (2018). Wasserstein Distance Guided Representation Learning for Domain Adaptation. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11784