Gradient Regularized Contrastive Learning for Continual Domain Adaptation

Authors

  • Shixiang Tang (The University of Sydney; SenseTime Group Limited)
  • Peng Su (The Chinese University of Hong Kong)
  • Dapeng Chen (SenseTime Group Limited)
  • Wanli Ouyang (The University of Sydney)

Keywords

Object Detection & Categorization

Abstract

Human beings can quickly adapt to environmental changes by leveraging past learning experience. However, adapting deep neural networks to dynamic environments remains challenging for machine learning algorithms. To better understand this issue, we study the problem of continual domain adaptation, where the model is presented with a labelled source domain and a sequence of unlabelled target domains. The obstacles in this problem are both domain shift and catastrophic forgetting. We propose Gradient Regularized Contrastive Learning (GRCL) to overcome these obstacles. At the core of our method, gradient regularization plays two key roles: (1) enforcing that the gradient does not harm the discriminative ability of source features, which can, in turn, benefit the model's adaptation ability on target domains; (2) constraining the gradient not to increase the classification loss on old target domains, which enables the model to preserve its performance on old target domains while adapting to an incoming target domain. Experiments on the Digits, DomainNet and Office-Caltech benchmarks demonstrate the strong performance of our approach compared to the state of the art.
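The two gradient constraints described in the abstract (do not harm source-feature discriminability; do not increase classification loss on old target domains) can be illustrated with a GEM/A-GEM-style projection: if a proposed update conflicts with a constraint gradient (negative inner product), project out the conflicting component. This is a minimal illustrative sketch, not the paper's exact optimization; the function name and the sequential-projection strategy are assumptions for exposition.

```python
import numpy as np

def project_gradient(g, constraint_grads):
    """Illustrative GEM/A-GEM-style projection (not GRCL's exact solver).

    g: proposed update gradient (1-D array).
    constraint_grads: gradients of constraint losses, e.g. the source
    discriminative loss and the classification losses on old target domains.

    If g conflicts with a constraint gradient g_k (g . g_k < 0, meaning the
    update would increase that constraint loss), remove the conflicting
    component by projecting g onto the half-space {v : v . g_k >= 0}.
    """
    g = np.asarray(g, dtype=float).copy()
    for g_k in constraint_grads:
        g_k = np.asarray(g_k, dtype=float)
        dot = g @ g_k
        if dot < 0:  # update would increase this constraint's loss
            g -= (dot / (g_k @ g_k)) * g_k
    return g

# Example: the raw update conflicts with an old-domain gradient ...
g = np.array([1.0, -1.0])
g_old = np.array([0.0, 1.0])
g_safe = project_gradient(g, [g_old])  # conflicting component removed
```

After projection, `g_safe @ g_old >= 0`, so a small step along `g_safe` does not (to first order) increase the old domain's loss, which is the behaviour the abstract's second constraint asks for.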

Published

2021-05-18

How to Cite

Tang, S., Su, P., Chen, D., & Ouyang, W. (2021). Gradient Regularized Contrastive Learning for Continual Domain Adaptation. Proceedings of the AAAI Conference on Artificial Intelligence, 35(3), 2665-2673. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16370

Section

AAAI Technical Track on Computer Vision II