Denoised Maximum Classifier Discrepancy for Source-Free Unsupervised Domain Adaptation

Authors

  • Tong Chu School of Computer Science and Engineering & Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China
  • Yahao Liu School of Computer Science and Engineering & Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China
  • Jinhong Deng School of Computer Science and Engineering & Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China
  • Wen Li School of Computer Science and Engineering & Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China
  • Lixin Duan School of Computer Science and Engineering & Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China

DOI:

https://doi.org/10.1609/aaai.v36i1.19925

Keywords:

Computer Vision (CV)

Abstract

Source-Free Unsupervised Domain Adaptation (SFUDA) aims to adapt a pre-trained source model to an unlabeled target domain without access to the original labeled source domain samples. Many existing SFUDA approaches apply the self-training strategy, which involves iteratively selecting confidently predicted target samples as pseudo-labeled samples used to train the model to fit the target domain. However, the self-training strategy may also suffer from sample selection bias and be impacted by the label noise of the pseudo-labeled samples. In this work, we provide a rigorous theoretical analysis of how these two issues affect the model's generalization ability when applying the self-training strategy to the SFUDA problem. Based on this theoretical analysis, we then propose a new Denoised Maximum Classifier Discrepancy (D-MCD) method for SFUDA to effectively address these two issues. In particular, we first minimize the distribution mismatch between the selected pseudo-labeled samples and the remaining target domain samples to alleviate the sample selection bias. Moreover, we design a strong-weak self-training paradigm to denoise the selected pseudo-labeled samples, where the strong network is used to select pseudo-labeled samples while the weak network helps the strong network to filter out hard samples to avoid incorrect labels. In this way, we are able to ensure both the quality of the pseudo-labels and the generalization ability of the trained model on the target domain. We achieve state-of-the-art results on three domain adaptation benchmark datasets, which clearly validates the effectiveness of our proposed approach. Full code is available at https://github.com/kkkkkkon/D-MCD.
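The strong-weak pseudo-label selection idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact criterion: the confidence threshold and the agreement rule are illustrative assumptions, and `strong_probs`/`weak_probs` stand in for the softmax outputs of the two networks on unlabeled target samples.

```python
import numpy as np

def select_pseudo_labels(strong_probs, weak_probs, conf_thresh=0.95):
    """Sketch of strong-weak pseudo-label selection.

    strong_probs, weak_probs: (N, C) arrays of class probabilities from
    the strong and weak networks. A target sample is kept only if the
    strong network is confident AND the weak network predicts the same
    class, filtering out hard samples likely to carry label noise.
    Returns the indices of kept samples and their pseudo-labels.
    """
    strong_pred = strong_probs.argmax(axis=1)
    weak_pred = weak_probs.argmax(axis=1)
    confident = strong_probs.max(axis=1) >= conf_thresh  # strong net is sure
    agree = strong_pred == weak_pred                     # weak net concurs
    keep = confident & agree
    return np.flatnonzero(keep), strong_pred[keep]

# Toy example: sample 0 is confident and agreed on; sample 1 is not
# confident enough; sample 2 is confident but the weak net disagrees.
strong = np.array([[0.98, 0.02], [0.60, 0.40], [0.97, 0.03]])
weak = np.array([[0.90, 0.10], [0.55, 0.45], [0.30, 0.70]])
idx, labels = select_pseudo_labels(strong, weak)
print(idx, labels)  # only sample 0 survives, pseudo-labeled as class 0
```

Only the samples passing both checks would then be used as pseudo-labeled training data in the next self-training round.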

Published

2022-06-28

How to Cite

Chu, T., Liu, Y., Deng, J., Li, W., & Duan, L. (2022). Denoised Maximum Classifier Discrepancy for Source-Free Unsupervised Domain Adaptation. Proceedings of the AAAI Conference on Artificial Intelligence, 36(1), 472-480. https://doi.org/10.1609/aaai.v36i1.19925

Section

AAAI Technical Track on Computer Vision I