Compensating Distribution Drifts in Continual Learning with Pre-trained Vision Transformers

Authors

  • Xuan Rao School of Systems Science, Beijing Normal University, China
  • Simian Xu School of Physics, Peking University, China
  • Zheng Li School of Automation and Intelligent Manufacturing, Southern University of Science and Technology, China
  • Bo Zhao School of Systems Science, Beijing Normal University, China
  • Derong Liu School of Artificial Intelligence, Anhui University, China
  • Mingming Ha Kuaishou Technology
  • Cesare Alippi Politecnico di Milano, Italy Università della Svizzera Italiana, Switzerland

DOI:

https://doi.org/10.1609/aaai.v40i30.39698

Abstract

Recent advances have shown that sequential fine-tuning (SeqFT) of pre-trained vision transformers (ViTs), followed by classifier refinement using approximate distributions of class features, can be an effective strategy for class-incremental learning (CIL). However, this approach is susceptible to distribution drift, caused by the sequential optimization of shared backbone parameters. This results in a mismatch between the stored feature distributions of previously learned classes and those produced by the updated model, ultimately degrading classifier performance over time. To address this issue, we introduce a latent space transition operator and propose Sequential Learning with Drift Compensation (SLDC). SLDC aims to align feature distributions across tasks to mitigate the impact of drift. First, we present a linear variant of SLDC, which learns a linear operator by solving a regularized least-squares problem that maps features before and after fine-tuning. Next, we extend this with a weakly nonlinear SLDC variant, which assumes that the ideal transition operator lies between purely linear and fully nonlinear transformations. This is implemented using learnable, weakly nonlinear mappings that balance flexibility and generalization. To further reduce representation drift, we apply knowledge distillation (KD) in both algorithmic variants. Extensive experiments on standard CIL benchmarks demonstrate that SLDC significantly improves the performance of SeqFT. Notably, by combining KD to address representation drift with SLDC to compensate for distribution drift, SeqFT achieves performance comparable to joint training across all evaluated datasets.
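The linear SLDC variant described above can be sketched as a ridge-regression fit between features extracted before and after fine-tuning, with the learned operator then applied to stored class statistics. The following is a minimal NumPy sketch under that reading of the abstract; the function names, the regularization value, and the dictionary-of-means representation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_linear_transition(feats_old, feats_new, lam=1e-3):
    """Fit a linear operator W minimizing the regularized least-squares
    objective ||feats_old @ W - feats_new||^2 + lam * ||W||^2.

    feats_old, feats_new: (n_samples, d) features of the same inputs,
    extracted by the backbone before and after fine-tuning on a new task.
    Returns W of shape (d, d)."""
    d = feats_old.shape[1]
    # Closed-form ridge solution: (X^T X + lam I)^{-1} X^T Y
    A = feats_old.T @ feats_old + lam * np.eye(d)
    B = feats_old.T @ feats_new
    return np.linalg.solve(A, B)

def compensate_means(class_means, W):
    """Map stored per-class feature means of earlier tasks into the
    latent space of the updated backbone (hypothetical helper)."""
    return {c: mu @ W for c, mu in class_means.items()}
```

In this sketch, drift compensation amounts to transporting the old class statistics through `W` rather than re-extracting features from stored data, which is what makes the approach compatible with exemplar-free CIL.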

Published

2026-03-14

How to Cite

Rao, X., Xu, S., Li, Z., Zhao, B., Liu, D., Ha, M., & Alippi, C. (2026). Compensating Distribution Drifts in Continual Learning with Pre-trained Vision Transformers. Proceedings of the AAAI Conference on Artificial Intelligence, 40(30), 25090–25098. https://doi.org/10.1609/aaai.v40i30.39698

Section

AAAI Technical Track on Machine Learning VII