Unified Representation Causal Prompt Distillation for Re-Inference-Free Lifelong Person Re-Identification

Authors

  • Jiaqi Zhao — School of Computer Science and Technology / School of Artificial Intelligence, China University of Mining and Technology, Xuzhou, China; Mine Digitization Engineering Research Center of the Ministry of Education, Xuzhou, China
  • Jie Luo — School of Computer Science and Technology / School of Artificial Intelligence, China University of Mining and Technology, Xuzhou, China; Mine Digitization Engineering Research Center of the Ministry of Education, Xuzhou, China
  • Yong Zhou — School of Computer Science and Technology / School of Artificial Intelligence, China University of Mining and Technology, Xuzhou, China; Mine Digitization Engineering Research Center of the Ministry of Education, Xuzhou, China
  • Wen-Liang Du — School of Computer Science and Technology / School of Artificial Intelligence, China University of Mining and Technology, Xuzhou, China; Mine Digitization Engineering Research Center of the Ministry of Education, Xuzhou, China
  • Xixi Li — School of Computer Science and Technology / School of Artificial Intelligence, China University of Mining and Technology, Xuzhou, China; Mine Digitization Engineering Research Center of the Ministry of Education, Xuzhou, China
  • Rui Yao — School of Computer Science and Technology / School of Artificial Intelligence, China University of Mining and Technology, Xuzhou, China; Mine Digitization Engineering Research Center of the Ministry of Education, Xuzhou, China

DOI:

https://doi.org/10.1609/aaai.v40i15.38315

Abstract

Lifelong person re-identification (LReID) aims to retrieve a target person from sequentially collected data. Owing to significant domain gaps between datasets and the continual arrival of training data from new scenarios, weak inter-domain generalization and catastrophic forgetting remain major challenges for LReID. To tackle these issues, a novel LReID method called Unified Representation Causal Prompt Distillation (URCPD) is proposed. Specifically, to reduce domain gaps among datasets from different scenes and improve the model's inter-domain generalization, a Feature Decoupling Style Transfer (FDST) module is proposed to map new features into a unified feature space. Furthermore, to reduce the accumulated forgetting of old knowledge during training, a Causal Prompt Distillation (CPD) module is introduced; it eliminates the re-inference step during distillation and embeds memory prompts to combat catastrophic forgetting. Extensive experiments on five classic seen LReID datasets and seven unseen datasets demonstrate that our method significantly outperforms state-of-the-art methods.
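The abstract does not detail how FDST maps features into a unified space, but a common style-transfer primitive for bridging domain gaps is to align per-sample feature statistics to shared target statistics (AdaIN-style). The sketch below is purely illustrative of that general idea under assumed target statistics; the function name and parameters are hypothetical and not taken from the paper.

```python
import math

def align_to_unified_stats(feat, target_mean=0.0, target_std=1.0):
    """Illustrative AdaIN-like alignment: normalize a feature vector's
    statistics, then re-scale to shared target statistics so features
    from different domains land in one unified space.
    NOTE: a generic sketch, not the paper's actual FDST design."""
    mean = sum(feat) / len(feat)
    var = sum((x - mean) ** 2 for x in feat) / len(feat)
    std = math.sqrt(var) + 1e-6  # epsilon guards against zero variance
    return [(x - mean) / std * target_std + target_mean for x in feat]

# Features from two "domains" with very different statistics end up
# with matching (target) statistics after alignment.
domain_a = align_to_unified_stats([1.0, 2.0, 3.0, 4.0])
domain_b = align_to_unified_stats([100.0, 220.0, 310.0, 450.0])
```

After alignment, both vectors have (approximately) zero mean and unit standard deviation, so downstream distillation sees statistically comparable inputs regardless of the source domain.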

Published

2026-03-14

How to Cite

Zhao, J., Luo, J., Zhou, Y., Du, W.-L., Li, X., & Yao, R. (2026). Unified Representation Causal Prompt Distillation for Re-Inference-Free Lifelong Person Re-Identification. Proceedings of the AAAI Conference on Artificial Intelligence, 40(15), 13144–13152. https://doi.org/10.1609/aaai.v40i15.38315

Section

AAAI Technical Track on Computer Vision XII