Don’t Start Over: A Cost-Effective Framework for Migrating Personalized Prompts Between LLMs
DOI:
https://doi.org/10.1609/aaai.v40i41.40805
Abstract
Personalization in Large Language Models (LLMs) often relies on user-specific soft prompts. However, these prompts become obsolete when the foundation model is upgraded, necessitating costly, full-scale retraining. To overcome this limitation, we propose the Prompt-level User Migration Adapter (PUMA), a lightweight framework to efficiently migrate personalized prompts across incompatible models. PUMA utilizes a parameter-efficient adapter to bridge the semantic gap, combined with a group-based user selection strategy to significantly reduce training costs. Experiments on three large-scale datasets show our method matches or even surpasses the performance of retraining from scratch, reducing computational cost by up to 98%. The framework demonstrates strong generalization across diverse model architectures and robustness in advanced scenarios like chained and aggregated migrations, offering a practical path for the sustainable evolution of personalized AI by decoupling user assets from the underlying models.
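The core idea of the abstract can be illustrated with a minimal sketch. This is not the paper's actual PUMA implementation; it is a hypothetical example, assuming the adapter is a small MLP that maps a user's soft-prompt embeddings from the source model's embedding space to the target model's, trained only on a small, representative subset of users (standing in for the paper's group-based selection). All names, dimensions, and hyperparameters below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PromptMigrationAdapter(nn.Module):
    """Hypothetical adapter: maps soft prompts from a source LLM's
    embedding space (src_dim) to a target LLM's space (tgt_dim)."""

    def __init__(self, src_dim: int, tgt_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(src_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, tgt_dim),
        )

    def forward(self, soft_prompt: torch.Tensor) -> torch.Tensor:
        # soft_prompt: (..., num_prompt_tokens, src_dim)
        # returns:     (..., num_prompt_tokens, tgt_dim)
        return self.net(soft_prompt)

# Train on a small user subset rather than retraining every user's prompt.
# Here the "target" prompts are stand-in tensors; in practice they would be
# prompts tuned on the upgraded model for the selected users only.
adapter = PromptMigrationAdapter(src_dim=768, tgt_dim=1024)
src_prompts = torch.randn(8, 10, 768)    # 8 sampled users, 10 prompt tokens
tgt_prompts = torch.randn(8, 10, 1024)   # their prompts in the target space
opt = torch.optim.Adam(adapter.parameters(), lr=1e-3)

for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(adapter(src_prompts), tgt_prompts)
    loss.backward()
    opt.step()
```

Once trained, the adapter migrates the remaining users' prompts in a single forward pass, which is what makes the subset-based training dramatically cheaper than retraining every user's prompt from scratch.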
Published
2026-03-14
How to Cite
Zhao, Z., Gao, C., Zhang, Y., Liu, H., Gan, W., Guo, H., … Feng, F. (2026). Don’t Start Over: A Cost-Effective Framework for Migrating Personalized Prompts Between LLMs. Proceedings of the AAAI Conference on Artificial Intelligence, 40(41), 35003–35011. https://doi.org/10.1609/aaai.v40i41.40805
Issue
Section
AAAI Technical Track on Natural Language Processing VI