Evidential Neighborhood Contrastive Learning for Universal Domain Adaptation

Authors

  • Liang Chen, Peking University
  • Yihang Lou, Huawei
  • Jianzhong He, Huawei
  • Tao Bai, Huawei
  • Minghua Deng, Peking University

DOI:

https://doi.org/10.1609/aaai.v36i6.20575

Keywords:

Machine Learning (ML), Computer Vision (CV)

Abstract

Universal domain adaptation (UniDA) aims to transfer knowledge learned from a labeled source domain to an unlabeled target domain without any constraints on the label sets. However, domain shift and category shift make UniDA extremely challenging, largely because of the need to identify both shared “known” samples and private “unknown” samples. Previous methods rarely exploit the intrinsic manifold structure between the two domains for feature alignment, and they rely on softmax-based scores, whose inherent class competition makes them unreliable for detecting underlying “unknown” samples. In this paper, we therefore propose TNT, a novel evidential neighborhood contrastive learning framework, to address these issues. Specifically, TNT first introduces a new domain alignment principle: semantically consistent samples should be geometrically adjacent to each other, whether within or across domains. Guided by this criterion, a cross-domain multi-sample contrastive loss based on mutual nearest neighbors is designed to achieve common-category matching and private-category separation. Second, for accurate “unknown” sample detection, TNT introduces a class-competition-free uncertainty score derived from evidential deep learning. Instead of setting a single threshold, TNT learns a category-aware heterogeneous threshold vector to reject diverse “unknown” samples. Extensive experiments on three benchmarks demonstrate that TNT significantly outperforms previous state-of-the-art UniDA methods.
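The two mechanisms named in the abstract can be made concrete with a short sketch. The block below is an illustrative PyTorch approximation, not the authors' implementation: mutual_nearest_neighbors is a hypothetical helper showing how cross-domain mutual k-nearest neighbors can be paired as positives for a contrastive loss, and evidential_uncertainty computes the standard Dirichlet-based uncertainty u = K / S from evidential deep learning (Sensoy et al., 2018), which avoids softmax class competition. The softplus evidence activation and the choice k = 5 are assumptions.

```python
import torch
import torch.nn.functional as F

def mutual_nearest_neighbors(src_feats: torch.Tensor,
                             tgt_feats: torch.Tensor,
                             k: int = 5):
    """Pair source/target samples that are mutual k-nearest neighbors.

    Hypothetical helper: such pairs can serve as positives for a
    cross-domain contrastive loss, since shared-category samples tend
    to be mutually adjacent while private-category samples rarely
    form mutual pairs.
    """
    src = F.normalize(src_feats, dim=1)      # L2-normalize features
    tgt = F.normalize(tgt_feats, dim=1)
    sim = src @ tgt.t()                      # (Ns, Nt) cosine similarities
    s2t = sim.topk(k, dim=1).indices         # each source sample's k NNs in target
    t2s = sim.topk(k, dim=0).indices.t()     # each target sample's k NNs in source
    return [(i, j.item())
            for i in range(src.size(0)) for j in s2t[i]
            if i in t2s[j]]                  # keep only mutual pairs

def evidential_uncertainty(logits: torch.Tensor) -> torch.Tensor:
    """Dirichlet-based uncertainty u = K / S (Sensoy et al., 2018).

    Unlike softmax confidence, where boosting one class score must
    suppress the others, total Dirichlet evidence can be low for all
    classes at once, so this score is free of class competition.
    """
    evidence = F.softplus(logits)            # non-negative per-class evidence (assumed activation)
    alpha = evidence + 1.0                   # Dirichlet concentration parameters
    strength = alpha.sum(dim=1)              # total strength S = sum_k alpha_k
    return logits.shape[1] / strength        # u = K / S; higher u => more likely "unknown"
```

In this reading, a target sample with high u would be compared against a learned per-category threshold rather than a single global cutoff, matching the category-aware heterogeneous threshold vector described above.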

Published

2022-06-28

How to Cite

Chen, L., Lou, Y., He, J., Bai, T., & Deng, M. (2022). Evidential Neighborhood Contrastive Learning for Universal Domain Adaptation. Proceedings of the AAAI Conference on Artificial Intelligence, 36(6), 6258-6267. https://doi.org/10.1609/aaai.v36i6.20575

Issue

Vol. 36 No. 6 (2022)

Section

AAAI Technical Track on Machine Learning I