Diffusion-calibrated Continual Test-time Adaptation
DOI: https://doi.org/10.1609/aaai.v40i33.39989

Abstract
Continual Test-Time Domain Adaptation (CTTA) aims to adapt a pre-trained source model to a dynamically evolving target domain without additional data collection or labeling effort. A key challenge in this setting is to improve performance rapidly in the current domain using unlabeled data while preserving generalization to future domains in complex scenarios. To enhance the discriminative capability of the inference model, we propose a novel framework that integrates an external auxiliary generative model with a test-time adaptation method, leveraging cross-validation to identify reliable supervisory signals. Specifically, for each test instance, we use a diffusion module to generate a calibrated instance conditioned on the textual description of its predicted category. From the generated instance, we design a learning strategy with two components: (1) the calibrated instance and its category form a supervisory signal; (2) the predicted category of the calibrated instance is compared with that of the test instance to select reliable signals. For these generated and selected instances, adaptive weighting is applied during optimization to stabilize the category distribution and preserve prediction diversity. Finally, based on the inverse diffusion process, we construct a negative instance for each generated instance and introduce robust contrastive learning to further calibrate model optimization. Extensive experiments demonstrate that our method achieves state-of-the-art performance across multiple benchmarks, and ablation studies validate the effectiveness of each proposed component.
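The cross-validation step described in the abstract (generate a calibrated instance from the pseudo-label, then keep it as a supervisory signal only if the model's prediction on it agrees with the original pseudo-label) can be sketched as follows. This is a minimal illustration, not the authors' implementation: `predict`, `generate`, and the confidence-based adaptive weight are hypothetical stand-ins for the classifier, the text-conditioned diffusion module, and the paper's weighting rule.

```python
# Hedged sketch of reliable-signal selection via diffusion calibration.
# All callables here are hypothetical stand-ins, not the paper's method.

def select_reliable_signals(instances, predict, generate, confidence):
    """Cross-validate diffusion-calibrated instances against test predictions.

    predict(x)      -> predicted category label for instance x
    generate(label) -> calibrated instance synthesized from the label's text
    confidence(x)   -> scalar adaptive weight for the selected signal
    """
    signals = []
    for x in instances:
        y_hat = predict(x)            # pseudo-label of the test instance
        x_cal = generate(y_hat)       # diffusion-calibrated counterpart
        if predict(x_cal) == y_hat:   # agreement => reliable signal
            w = confidence(x_cal)     # adaptive weight used in optimization
            signals.append((x_cal, y_hat, w))
    return signals


# Toy check: a "model" that classifies by sign and a "generator" that
# emits one canonical instance per label.
predict = lambda x: int(x > 0)
generate = lambda y: 1.0 if y == 1 else -1.0
confidence = lambda x: abs(x)
signals = select_reliable_signals([2.0, -3.0], predict, generate, confidence)
print(signals)  # both toy instances survive the agreement check
```

In the full method, pairs that survive this check supply the supervisory signal for adaptation, while the weights stabilize the category distribution during optimization.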
Published
2026-03-14
How to Cite
Yang, X., Li, M., & Wei, K. (2026). Diffusion-calibrated Continual Test-time Adaptation. Proceedings of the AAAI Conference on Artificial Intelligence, 40(33), 27684-27692. https://doi.org/10.1609/aaai.v40i33.39989
Section
AAAI Technical Track on Machine Learning X