Test-Time Domain Adaptation by Learning Domain-Aware Batch Normalization

Authors

  • Yanan Wu — Key Laboratory of Big Data & Artificial Intelligence in Transportation, Ministry of Education, Beijing Jiaotong University, Beijing, 100044, China; School of Computer and Information Technology, Beijing Jiaotong University, Beijing, 100044, China
  • Zhixiang Chi — The Edward S. Rogers Sr. ECE Department, University of Toronto, Toronto, M5S 3G8, Canada
  • Yang Wang — Department of Computer Science and Software Engineering, Concordia University, Montreal, H3G 2J1, Canada
  • Konstantinos N. Plataniotis — The Edward S. Rogers Sr. ECE Department, University of Toronto, Toronto, M5S 3G8, Canada
  • Songhe Feng — Key Laboratory of Big Data & Artificial Intelligence in Transportation, Ministry of Education, Beijing Jiaotong University, Beijing, 100044, China; School of Computer and Information Technology, Beijing Jiaotong University, Beijing, 100044, China

DOI:

https://doi.org/10.1609/aaai.v38i14.29527

Keywords:

ML: Transfer, Domain Adaptation, Multi-Task Learning, CV: Representation Learning for Vision

Abstract

Test-time domain adaptation aims to adapt a model trained on source domains to unseen target domains using only a few unlabeled images. Emerging research has shown that label and domain information are embedded separately in the weight matrices and the batch normalization (BN) layers. Previous works typically update the whole network naively, without explicitly decoupling label knowledge from domain knowledge; this leads to knowledge interference and defective distribution adaptation. In this work, we propose to reduce such learning interference and enhance domain knowledge learning by manipulating only the BN layers. However, the normalization step in BN is intrinsically unstable when its statistics are re-estimated from a few samples. We find that this ambiguity can be greatly reduced by updating only the two affine parameters in each BN layer while keeping the source-domain statistics. To further enhance domain knowledge extraction from unlabeled data, we construct an auxiliary branch with label-independent self-supervised learning (SSL) to provide supervision. Moreover, we propose a bi-level optimization based on meta-learning to align the learning objectives of the auxiliary and main branches, so that the auxiliary branch adapts to the domain in a way that benefits the main task at subsequent inference. Our method keeps the same computational cost at inference, as the auxiliary branch can be discarded entirely after adaptation. Extensive experiments show that our method outperforms prior works on five WILDS real-world domain shift datasets. Our method can also be integrated with label-dependent optimization methods to further push the performance boundary. Our code is available at https://github.com/ynanwu/MABN.
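The core idea in the abstract — normalize with frozen source-domain statistics while adapting only the two affine parameters — can be illustrated with a minimal sketch. This is a hypothetical illustration, not the authors' implementation: the class name `DomainAwareBN` and its interface are assumptions made for clarity.

```python
import math

class DomainAwareBN:
    """Minimal sketch (assumption, not the paper's code): a per-feature BN
    layer whose normalization statistics stay frozen to the source domain,
    while only the affine parameters (gamma, beta) are trainable at test time."""

    def __init__(self, source_mean, source_var, eps=1e-5):
        self.mu = list(source_mean)        # frozen source-domain means
        self.var = list(source_var)        # frozen source-domain variances
        self.eps = eps
        self.gamma = [1.0] * len(self.mu)  # trainable scale (adapted at test time)
        self.beta = [0.0] * len(self.mu)   # trainable shift (adapted at test time)

    def forward(self, x):
        # Normalize with the frozen source statistics, then affine-transform.
        # Test-time adaptation would update only gamma and beta, never mu/var.
        return [
            g * (xi - m) / math.sqrt(v + self.eps) + b
            for xi, m, v, g, b in zip(x, self.mu, self.var, self.gamma, self.beta)
        ]

bn = DomainAwareBN(source_mean=[0.0, 1.0], source_var=[1.0, 4.0])
out = bn.forward([1.0, 3.0])  # ≈ [1.0, 1.0] before any affine adaptation
```

Because the statistics are never re-estimated from the few target samples, the normalization step stays stable; the adaptation signal (e.g., from the auxiliary SSL branch) flows only into `gamma` and `beta`.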

Published

2024-03-24

How to Cite

Wu, Y., Chi, Z., Wang, Y., Plataniotis, K. N., & Feng, S. (2024). Test-Time Domain Adaptation by Learning Domain-Aware Batch Normalization. Proceedings of the AAAI Conference on Artificial Intelligence, 38(14), 15961-15969. https://doi.org/10.1609/aaai.v38i14.29527

Section

AAAI Technical Track on Machine Learning V