Domain Conditioned Adaptation Network

Authors

  • Shuang Li Beijing Institute of Technology
  • Chi Liu Beijing Institute of Technology
  • Qiuxia Lin Beijing Institute of Technology
  • Binhui Xie Beijing Institute of Technology
  • Zhengming Ding Indiana University-Purdue University Indianapolis
  • Gao Huang Tsinghua University
  • Jian Tang Syracuse University

DOI:

https://doi.org/10.1609/aaai.v34i07.6801

Abstract

Tremendous research effort has been devoted to advancing deep domain adaptation (DA) by seeking domain-invariant features. Most existing deep DA models focus only on aligning the feature representations of task-specific layers across domains, while sharing the entire convolutional architecture between source and target. However, we argue that such strongly shared convolutional layers may be harmful to domain-specific feature learning when the source and target data distributions differ substantially. In this paper, we relax the shared-convnet assumption made by previous DA methods and propose a Domain Conditioned Adaptation Network (DCAN), which excites distinct convolutional channels with a domain conditioned channel attention mechanism. As a result, critical low-level domain-dependent knowledge can be explored appropriately. To the best of our knowledge, this is the first work to explore domain-wise convolutional channel activation for deep DA networks. Moreover, to effectively align high-level feature distributions across the two domains, we deploy domain conditioned feature correction blocks after the task-specific layers, which explicitly correct the domain discrepancy. Extensive experiments on three cross-domain benchmarks demonstrate that the proposed approach outperforms existing methods by a large margin, especially on very tough cross-domain learning tasks.
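The core idea of domain conditioned channel attention — routing source and target features through separate excitation branches so each domain can activate different convolutional channels — can be illustrated with a minimal NumPy sketch. This is a hypothetical SE-style (squeeze-and-excitation) approximation for illustration only, not the authors' implementation; all names, layer sizes, and the two-branch design are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DomainConditionedChannelAttention:
    """Channel attention with a separate excitation MLP per domain.

    Hypothetical sketch: global-average-pool each channel, pass the
    pooled vector through a domain-specific two-layer MLP, and use the
    sigmoid output to re-weight the channels of the feature map.
    """

    def __init__(self, channels, reduction=4):
        hidden = channels // reduction
        # One (squeeze -> excite) weight pair per domain.
        self.weights = {
            d: (rng.standard_normal((channels, hidden)) * 0.1,
                rng.standard_normal((hidden, channels)) * 0.1)
            for d in ("source", "target")
        }

    def __call__(self, feat, domain):
        # feat: (N, C, H, W) convolutional feature map.
        w1, w2 = self.weights[domain]
        pooled = feat.mean(axis=(2, 3))        # squeeze: (N, C)
        h = np.maximum(pooled @ w1, 0.0)       # ReLU bottleneck
        gates = sigmoid(h @ w2)                # per-channel gates in (0, 1)
        return feat * gates[:, :, None, None]  # excite: re-weight channels

attn = DomainConditionedChannelAttention(channels=8)
x = rng.standard_normal((2, 8, 4, 4))
src_out = attn(x, "source")   # source branch activates its own channels
tgt_out = attn(x, "target")   # target branch yields different weighting
```

Because the two branches hold independent weights, the same input is re-weighted differently depending on its domain label, which is the mechanism the abstract describes for exploring domain-specific low-level features.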

Published

2020-04-03

How to Cite

Li, S., Liu, C., Lin, Q., Xie, B., Ding, Z., Huang, G., & Tang, J. (2020). Domain Conditioned Adaptation Network. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 11386-11393. https://doi.org/10.1609/aaai.v34i07.6801

Section

AAAI Technical Track: Vision