DAST: Unsupervised Domain Adaptation in Semantic Segmentation Based on Discriminator Attention and Self-Training

Authors

  • Fei Yu Peking University
  • Mo Zhang Peking University
  • Hexin Dong Peking University
  • Sheng Hu Peking University
  • Bin Dong Peking University, Beijing International Center for Mathematical Research (BICMR)
  • Li Zhang Peking University

DOI:

https://doi.org/10.1609/aaai.v35i12.17285

Keywords:

Transfer/Adaptation/Multi-task/Meta/Automated Learning, Segmentation

Abstract

Unsupervised domain adaptation has recently been used to reduce the domain shift, which ultimately improves the performance of semantic segmentation on unlabeled real-world data. In this paper, we follow this line of work and propose a novel method to reduce the domain shift using strategies of discriminator attention and self-training. The discriminator attention strategy contains a two-stage adversarial learning process, which explicitly distinguishes the well-aligned (domain-invariant) and poorly-aligned (domain-specific) features, and then guides the model to focus on the latter. The self-training strategy adaptively improves the decision boundary of the model for the target domain, which implicitly facilitates the extraction of domain-invariant features. By combining the two strategies, we find a more effective way to reduce the domain shift. Extensive experiments demonstrate the effectiveness of the proposed method on numerous benchmark datasets.
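The self-training strategy described in the abstract typically assigns hard pseudo-labels to confidently predicted target-domain pixels and excludes the rest from the loss. Below is a minimal, hedged sketch of that pseudo-labeling step (not the authors' exact implementation; the function name, threshold value, and ignore index are illustrative assumptions):

```python
import numpy as np

IGNORE_INDEX = 255  # pixels excluded from the self-training loss (assumed convention)

def generate_pseudo_labels(probs, threshold=0.9):
    """Hard pseudo-labels for self-training on the target domain.

    probs: (H, W, C) softmax output of the segmentation model.
    Pixels whose maximum class probability falls below `threshold`
    are marked IGNORE_INDEX so they do not contribute to the loss.
    """
    labels = probs.argmax(axis=-1)          # most likely class per pixel
    confidence = probs.max(axis=-1)         # its probability
    labels[confidence < threshold] = IGNORE_INDEX
    return labels

# Toy example: a 2x2 "image" with 3 classes.
probs = np.array([[[0.95, 0.03, 0.02], [0.40, 0.35, 0.25]],
                  [[0.10, 0.85, 0.05], [0.05, 0.05, 0.90]]])
pseudo = generate_pseudo_labels(probs, threshold=0.8)
# The low-confidence pixel (max prob 0.40) is ignored; the rest keep their argmax class.
```

Retraining on such pseudo-labels moves the decision boundary toward confident target-domain regions, which is the implicit alignment effect the abstract refers to.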

Published

2021-05-18

How to Cite

Yu, F., Zhang, M., Dong, H., Hu, S., Dong, B., & Zhang, L. (2021). DAST: Unsupervised Domain Adaptation in Semantic Segmentation Based on Discriminator Attention and Self-Training. Proceedings of the AAAI Conference on Artificial Intelligence, 35(12), 10754-10762. https://doi.org/10.1609/aaai.v35i12.17285

Section

AAAI Technical Track on Machine Learning V