TY - JOUR
AU - Chen, Liang
AU - Du, Qianjin
AU - Lou, Yihang
AU - He, Jianzhong
AU - Bai, Tao
AU - Deng, Minghua
PY - 2022/06/28
Y2 - 2024/03/29
TI - Mutual Nearest Neighbor Contrast and Hybrid Prototype Self-Training for Universal Domain Adaptation
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 36
IS - 6
SE - AAAI Technical Track on Machine Learning I
DO - 10.1609/aaai.v36i6.20574
UR - https://ojs.aaai.org/index.php/AAAI/article/view/20574
SP - 6248
EP - 6257
AB - Universal domain adaptation (UniDA) aims to transfer knowledge learned from a labeled source domain to an unlabeled target domain under both domain shift and category shift. Without prior knowledge of the category overlap, it is challenging to simultaneously align the common categories across the two domains and separate their respective private categories. Moreover, previous studies use the source classifier's predictions to assign target samples one of the known labels or a single generic "unknown" label. However, over-reliance on the learned classifier is inevitably biased toward the source data and ignores the intrinsic structure of the target domain. Therefore, in this paper, we propose a novel two-stage UniDA framework called MATHS, based on the principles of mutual nearest neighbor contrast and hybrid prototype discrimination. In the first stage, we design an efficient mutual nearest neighbor contrastive learning scheme to achieve feature alignment, which exploits instance-level affinity relationships to uncover the intrinsic structure of the two domains. We introduce a bimodality hypothesis for the maximum discriminative probability distribution to detect possible target private samples, and present a data-based statistical approach to separate the common and private categories. In the second stage, to obtain more reliable label predictions, we propose an incremental pseudo-classifier for target data only, driven by hybrid representative prototypes. A confidence-guided prototype contrastive loss is designed to optimize the category allocation uncertainty via a self-training mechanism. Extensive experiments on three benchmarks demonstrate that MATHS outperforms previous state-of-the-art methods on most UniDA settings.
ER -