Low Category Uncertainty and High Training Potential Instance Learning for Unsupervised Domain Adaptation

Authors

  • Xinyu Zhang, Jilin University
  • Meng Kang, Jilin University
  • Shuai Lü, Jilin University

DOI:

https://doi.org/10.1609/aaai.v38i15.29630

Keywords:

ML: Transfer, Domain Adaptation, Multi-Task Learning, ML: Unsupervised & Self-Supervised Learning

Abstract

Recently, instance contrastive learning has achieved good results in unsupervised domain adaptation. It reduces the distances between positive samples and the anchor, increases the distances between negative samples and the anchor, and learns discriminative feature representations for target samples. However, most recent methods identify positive and negative samples according to whether a sample's pseudo-label matches the pseudo-label of the anchor. Due to the lack of target labels, many uncertain samples are mislabeled during training, and many samples with low training potential are also used. To address these problems, we propose Low Category Uncertainty and High Training Potential Instance Learning for Unsupervised Domain Adaptation (LUHP). We first propose a weight that measures the category uncertainty of each target sample; category uncertainty thresholds computed from these weights effectively filter out samples near the decision boundary. We then propose a new loss that focuses on samples with high training potential. Finally, for anchors with low category uncertainty, we propose a sample reuse strategy that makes the model more robust. We demonstrate the effectiveness of LUHP on four datasets widely used in unsupervised domain adaptation.
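The abstract does not give the exact formulas for the category uncertainty weight, the threshold, or the loss, so the sketch below is only an illustrative assumption: it uses the margin between the top-2 predicted class probabilities as a stand-in for "category uncertainty", a fixed threshold to filter samples near the decision boundary, and a standard pseudo-label-based instance contrastive loss over the retained samples. Function and parameter names (`category_uncertainty`, `uncertainty_threshold`, etc.) are hypothetical, not the authors' notation.

```python
# Illustrative sketch only; the weight, threshold, and loss used by LUHP are
# defined in the paper, not reproduced here. The choices below are assumptions.
import torch
import torch.nn.functional as F


def category_uncertainty(logits: torch.Tensor) -> torch.Tensor:
    """Assumed proxy: 1 minus the margin between the top-2 class probabilities.
    Samples near the decision boundary (small margin) get high uncertainty."""
    probs = F.softmax(logits, dim=1)
    top2 = probs.topk(2, dim=1).values            # (N, 2)
    return 1.0 - (top2[:, 0] - top2[:, 1])        # values in [0, 1]


def filtered_contrastive_loss(features, logits, pseudo_labels,
                              uncertainty_threshold=0.5, temperature=0.07):
    """Keep only low-uncertainty target samples, then apply a pseudo-label-based
    instance contrastive loss (positives share the anchor's pseudo-label)."""
    keep = category_uncertainty(logits) < uncertainty_threshold
    f = F.normalize(features[keep], dim=1)
    y = pseudo_labels[keep]
    if f.size(0) < 2:                              # not enough samples survive the filter
        return features.new_zeros(())
    sim = f @ f.t() / temperature                  # pairwise cosine similarities
    pos = (y.unsqueeze(0) == y.unsqueeze(1)).float()
    pos.fill_diagonal_(0)                          # anchors are not their own positives
    self_mask = torch.eye(f.size(0), dtype=torch.bool, device=f.device)
    sim = sim.masked_fill(self_mask, float('-inf'))
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    denom = pos.sum(dim=1).clamp(min=1)            # anchors without positives contribute 0
    return -(pos * log_prob).sum(dim=1).div(denom).mean()
```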

Published

2024-03-24

How to Cite

Zhang, X., Kang, M., & Lü, S. (2024). Low Category Uncertainty and High Training Potential Instance Learning for Unsupervised Domain Adaptation. Proceedings of the AAAI Conference on Artificial Intelligence, 38(15), 16881-16889. https://doi.org/10.1609/aaai.v38i15.29630

Section

AAAI Technical Track on Machine Learning VI