Unsupervised Domain Adaptation With Distribution Matching Machines

Authors

  • Yue Cao, Tsinghua University
  • Mingsheng Long, Tsinghua University
  • Jianmin Wang, Tsinghua University

DOI:

https://doi.org/10.1609/aaai.v32i1.11792

Keywords:

Transfer learning, Domain adaptation

Abstract

Domain adaptation generalizes a learning model across a source domain and a target domain that follow different distributions. Most existing work follows a two-step procedure: it first explores either feature matching or instance reweighting independently, and then trains the transfer classifier separately. In this paper, we show that either feature matching or instance reweighting alone can only reduce, but not remove, the cross-domain discrepancy, and that the knowledge hidden in the relations between the data labels from the source and target domains is important for unsupervised domain adaptation. We propose a new Distribution Matching Machine (DMM) based on the structural risk minimization principle, which learns a transfer support vector machine by extracting invariant feature representations and estimating unbiased instance weights that jointly minimize the cross-domain distribution discrepancy. This leads to a robust transfer learner that performs well against both mismatched features and irrelevant instances. Our theoretical analysis proves that the proposed approach further reduces the generalization error bound of related domain adaptation methods. Comprehensive experiments validate that DMM significantly outperforms competitive methods on standard domain adaptation benchmarks.
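The cross-domain discrepancy in this line of work is typically measured with a kernel maximum mean discrepancy (MMD) between a reweighted source sample and the target sample. The NumPy sketch below illustrates that discrepancy term only, not the full DMM optimization described in the paper; the function names (rbf_kernel, weighted_mmd), the RBF kernel choice, the bandwidth gamma, and the weight vector alpha are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel matrix between the rows of X and the rows of Y.
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def weighted_mmd(Xs, Xt, alpha, gamma=1.0):
    """Squared MMD between the alpha-reweighted source sample and the target sample.

    alpha is a nonnegative source weight vector summing to 1; the target sample
    is weighted uniformly.
    """
    Kss = rbf_kernel(Xs, Xs, gamma)
    Ktt = rbf_kernel(Xt, Xt, gamma)
    Kst = rbf_kernel(Xs, Xt, gamma)
    m = Xt.shape[0]
    return (alpha @ Kss @ alpha
            + Ktt.sum() / m**2
            - 2.0 * alpha @ Kst.sum(axis=1) / m)

# Toy usage: two Gaussian samples with shifted means and uniform source weights.
rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, (50, 5))
Xt = rng.normal(0.5, 1.0, (40, 5))
alpha = np.full(50, 1.0 / 50)
print(weighted_mmd(Xs, Xt, alpha))  # driving this value down aligns the two domains
```

In DMM itself, the feature representation and the instance weights are learned jointly with the SVM classifier under structural risk minimization; the sketch above only shows the distribution-matching term that such a joint objective minimizes.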

Published

2018-04-29

How to Cite

Cao, Y., Long, M., & Wang, J. (2018). Unsupervised Domain Adaptation With Distribution Matching Machines. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11792