Prototypical Partial Optimal Transport for Universal Domain Adaptation
DOI:
https://doi.org/10.1609/aaai.v37i9.26287
Keywords:
ML: Transfer, Domain Adaptation, Multi-Task Learning; CV: Representation Learning for Vision; ML: Unsupervised & Self-Supervised Learning
Abstract
Universal domain adaptation (UniDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain without requiring the label sets of the two domains to be identical. The coexistence of domain shift and category shift makes the task challenging: before reducing the domain gap, we must distinguish “known” samples (i.e., samples whose labels exist in both domains) from “unknown” samples (i.e., samples whose labels exist in only one domain) in both domains. In this paper, we consider the problem from the viewpoint of distribution matching, in which the two distributions need only be aligned partially. A novel approach, dubbed mini-batch Prototypical Partial Optimal Transport (m-PPOT), is proposed to conduct partial distribution alignment for UniDA. In the training phase, besides minimizing m-PPOT, we also leverage the transport plan of m-PPOT to reweight source prototypes and target samples, and design a reweighted entropy loss and a reweighted cross-entropy loss to distinguish “known” and “unknown” samples. Experiments on four benchmarks show that our method outperforms previous state-of-the-art UniDA methods.
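The partial alignment at the heart of the abstract builds on partial optimal transport, which moves only a fraction s of the total probability mass between source prototypes and target samples, leaving the rest (e.g., “unknown” outliers) unmatched. The sketch below shows only that generic building block, not the paper's full m-PPOT (which adds mini-batching, prototype updates, and the reweighted losses); it uses the classic dummy-point reduction, which turns partial OT into a balanced OT linear program. All names (`partial_ot`, the toy data) are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def partial_ot(a, b, C, s):
    """Partial OT transporting total mass s, via the dummy-point reduction:
    add one dummy source and one dummy target with zero cost to real points
    and a prohibitive dummy-to-dummy cost, then solve the balanced OT LP."""
    n, m = C.shape
    C_ext = np.zeros((n + 1, m + 1))
    C_ext[:n, :m] = C
    C_ext[n, m] = 1e6 * (C.max() + 1)        # forbid dummy-to-dummy transport
    a_ext = np.append(a, b.sum() - s)        # dummy source absorbs unmatched target mass
    b_ext = np.append(b, a.sum() - s)        # dummy target absorbs unmatched source mass

    # Balanced OT as an LP: min <C_ext, T>  s.t.  row sums = a_ext, col sums = b_ext
    N, M = n + 1, m + 1
    A_eq = np.zeros((N + M, N * M))
    for i in range(N):                        # row-sum (source marginal) constraints
        A_eq[i, i * M:(i + 1) * M] = 1.0
    for j in range(M):                        # column-sum (target marginal) constraints
        A_eq[N + j, j::M] = 1.0
    res = linprog(C_ext.ravel(), A_eq=A_eq,
                  b_eq=np.concatenate([a_ext, b_ext]),
                  bounds=(0, None), method="highs")
    T = res.x.reshape(N, M)
    return T[:n, :m]                          # restriction to real points carries mass s

# Toy setup: 2 source prototypes, 4 target samples, one of them a far outlier.
prototypes = np.array([[0.0, 0.0], [5.0, 5.0]])
targets = np.array([[0.1, 0.0], [5.0, 4.9], [4.9, 5.1], [100.0, 100.0]])
C = ((prototypes[:, None, :] - targets[None, :, :]) ** 2).sum(-1)  # squared distances
a = np.full(2, 0.5)                           # uniform prototype masses
b = np.full(4, 0.25)                          # uniform target masses
T = partial_ot(a, b, C, s=0.75)               # match only 75% of the mass
```

The column sums of `T` then act as per-sample weights: the three targets near a prototype each receive mass 0.25, while the outlier receives none, which is the kind of signal the paper's reweighting losses exploit to separate “known” from “unknown” samples.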
Published
2023-06-26
How to Cite
Yang, Y., Gu, X., & Sun, J. (2023). Prototypical Partial Optimal Transport for Universal Domain Adaptation. Proceedings of the AAAI Conference on Artificial Intelligence, 37(9), 10852-10860. https://doi.org/10.1609/aaai.v37i9.26287
Section
AAAI Technical Track on Machine Learning IV