Adversarial-Learned Loss for Domain Adaptation


  • Minghao Chen Zhejiang University
  • Shuai Zhao Zhejiang University
  • Haifeng Liu Zhejiang University
  • Deng Cai Zhejiang University



Recently, remarkable progress has been made in learning transferable representations across domains. Previous works in domain adaptation are mainly based on two techniques: domain-adversarial learning and self-training. However, domain-adversarial learning only aligns feature distributions between domains and does not consider whether the target features are discriminative. Self-training, on the other hand, uses the model's own predictions to enhance the discrimination of target features, but it cannot explicitly align domain distributions. To combine the strengths of these two methods, we propose a novel method called Adversarial-Learned Loss for Domain Adaptation (ALDA). We first analyze the pseudo-label method, a typical self-training method, and show that there is a gap between the pseudo-labels and the ground truth, which can cause incorrect training. We therefore introduce a confusion matrix, learned in an adversarial manner in ALDA, to reduce this gap and align the feature distributions. Finally, a new loss function is automatically constructed from the learned confusion matrix and serves as the loss for unlabeled target samples. Our ALDA outperforms state-of-the-art approaches on four standard domain adaptation datasets. Our code is available at
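The core idea in the abstract can be sketched as follows: a hard pseudo-label on an unlabeled target sample is replaced by a soft, corrected label obtained from a confusion matrix, and the training loss is the cross-entropy against that corrected label. This is only an illustrative sketch, not the paper's implementation: in ALDA the confusion matrix is produced by an adversarially trained network, whereas here it is a fixed, hypothetical array, and the function name `alda_style_target_loss` is our own.

```python
import numpy as np

def alda_style_target_loss(probs, confusion):
    """Illustrative sketch of a confusion-matrix-corrected pseudo-label loss.

    probs     : (K,) softmax output of the classifier on one target sample
    confusion : (K, K) row-stochastic matrix; row k is an estimate of the
                true-label distribution given pseudo-label k (in ALDA this
                matrix is learned adversarially; here it is just given)
    """
    pseudo = int(np.argmax(probs))       # hard pseudo-label from the model
    corrected = confusion[pseudo]        # soft, corrected target label
    eps = 1e-12                          # numerical guard for log(0)
    # cross-entropy between the corrected label and the model's prediction
    return float(-np.sum(corrected * np.log(probs + eps)))

# Toy example with 3 classes: the model predicts class 0 with p = 0.7,
# and the (hypothetical) confusion matrix softens that pseudo-label.
probs = np.array([0.7, 0.2, 0.1])
confusion = np.array([[0.9, 0.05, 0.05],
                      [0.1, 0.8,  0.1 ],
                      [0.1, 0.1,  0.8 ]])
loss = alda_style_target_loss(probs, confusion)
```

Because the corrected label spreads mass over plausible alternatives, this loss is strictly larger than the naive pseudo-label loss `-log(0.7)` in the toy example, reflecting the uncertainty about the pseudo-label's correctness.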




How to Cite

Chen, M., Zhao, S., Liu, H., & Cai, D. (2020). Adversarial-Learned Loss for Domain Adaptation. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 3521-3528.



AAAI Technical Track: Machine Learning