How Does the Combined Risk Affect the Performance of Unsupervised Domain Adaptation Approaches?


  • Li Zhong, Tsinghua University & University of Technology Sydney
  • Zhen Fang, University of Technology Sydney
  • Feng Liu, University of Technology Sydney
  • Jie Lu, University of Technology Sydney
  • Bo Yuan, Tsinghua University
  • Guangquan Zhang, University of Technology Sydney


Transfer/Adaptation/Multi-task/Meta/Automated Learning


Unsupervised domain adaptation (UDA) aims to train a target classifier with labeled samples from the source domain and unlabeled samples from the target domain. Classical UDA learning bounds show that the target risk is upper bounded by three terms: the source risk, the distribution discrepancy, and the combined risk. Based on the assumption that the combined risk is a small fixed value, methods built on this bound train a target classifier by minimizing only estimators of the source risk and the distribution discrepancy. However, the combined risk may increase while both estimators are being minimized, which makes the target risk uncontrollable; the target classifier therefore cannot achieve ideal performance if we fail to control the combined risk. The key challenge in controlling the combined risk is the unavailability of labeled samples in the target domain. To address this challenge, we propose a method named E-MixNet. E-MixNet applies enhanced mixup, a generic vicinal distribution, to labeled source samples and pseudo-labeled target samples to compute a proxy of the combined risk. Experiments show that the proxy effectively curbs the increase of the combined risk while the source risk and distribution discrepancy are minimized. Furthermore, we show that adding the proxy of the combined risk to the loss functions of four representative UDA methods also improves their performance.
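The core mechanism described above — mixing labeled source samples with pseudo-labeled target samples via a vicinal (mixup-style) distribution and evaluating the classifier on the mixed pairs — can be sketched as follows. This is a minimal illustration of the general mixup-proxy idea, not the paper's exact E-MixNet formulation; the function names, the Beta-distributed mixing coefficient, and the lambda-clipping heuristic are assumptions for the sake of a runnable example.

```python
import numpy as np

def mixup_proxy_batch(x_s, y_s, x_t, y_t_pseudo, alpha=0.2, rng=None):
    """Mix labeled source samples with pseudo-labeled target samples.

    Returns vicinal training pairs (x_mix, y_mix) on which the classifier's
    loss can serve as a proxy of the combined risk. `y_s` and `y_t_pseudo`
    are one-hot (or soft) label matrices of the same shape.
    """
    rng = np.random.default_rng(rng)
    lam = rng.beta(alpha, alpha)          # mixup coefficient ~ Beta(alpha, alpha)
    lam = max(lam, 1.0 - lam)             # keep mixes closer to the labeled
                                          # source side (an assumption, since
                                          # target labels are only pseudo-labels)
    x_mix = lam * x_s + (1.0 - lam) * x_t
    y_mix = lam * y_s + (1.0 - lam) * y_t_pseudo
    return x_mix, y_mix

def combined_risk_proxy(probs, soft_labels, eps=1e-12):
    """Cross-entropy of classifier predictions on mixed samples against
    the mixed soft labels -- the quantity added to the training loss."""
    return -np.mean(np.sum(soft_labels * np.log(probs + eps), axis=1))
```

In training, this proxy term would be added to the usual objective (source risk plus a discrepancy estimator), so that minimizing the total loss also discourages the combined risk from growing.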




How to Cite

Zhong, L., Fang, Z., Liu, F., Lu, J., Yuan, B., & Zhang, G. (2021). How Does the Combined Risk Affect the Performance of Unsupervised Domain Adaptation Approaches? Proceedings of the AAAI Conference on Artificial Intelligence, 35(12), 11079-11087.



AAAI Technical Track on Machine Learning V