Learning Optimal Prompt Ensemble for Multi-source Visual Prompt Transfer
DOI:
https://doi.org/10.1609/aaai.v40i33.40044

Abstract
Prompt tuning has emerged as a lightweight strategy for adapting foundation models to downstream tasks, particularly for resource-constrained systems. As pre-trained prompts become valuable assets, combining multiple source prompts offers a promising way to enhance generalization on new tasks by leveraging complementary knowledge. However, naive aggregation overlooks the fact that different source prompts contribute unevenly to the target task. To address this, we propose HGPrompt, a dynamic framework that learns optimal ensemble weights. These weights are optimized by jointly maximizing an information-theoretic transferability metric and minimizing gradient conflicts via a novel regularization strategy. Specifically, we propose a differentiable prompt transferability metric that captures the discriminability of prompt-induced features on the target task. Meanwhile, HGPrompt matches the gradient variances with respect to different source prompts using Hessian and Fisher information, ensuring stable and coherent knowledge transfer while suppressing gradient conflicts among the sources. Extensive experiments on the large-scale VTAB benchmark demonstrate the state-of-the-art performance of HGPrompt, validating its effectiveness in learning an optimal ensemble for multi-source prompt transfer.
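To make the ensemble idea concrete, the sketch below illustrates the two ingredients the abstract names: a weighted combination of source prompts with learnable softmax weights, and a discriminability score for prompt-induced features (here a simple between-class/within-class scatter ratio as a Fisher-style proxy). All function names and the scatter-ratio formulation are illustrative assumptions, not the authors' actual HGPrompt implementation.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over ensemble logits."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

def ensemble_prompt(prompts, logits):
    """Combine K source prompts into one target prompt.

    prompts: array of shape (K, L, D) -- K source prompts,
             each with L tokens of dimension D.
    logits:  array of shape (K,) -- learnable ensemble logits.
    Returns the weighted prompt of shape (L, D).
    """
    w = softmax(logits)                      # weights sum to 1
    return np.tensordot(w, prompts, axes=1)  # (L, D)

def discriminability(features, labels):
    """Toy transferability proxy: between-class over within-class scatter.

    Higher values mean the prompt-induced features separate the
    target classes better (this stands in for the paper's
    information-theoretic metric, which is an assumption here).
    """
    mu = features.mean(axis=0)
    s_between = s_within = 0.0
    for c in np.unique(labels):
        fc = features[labels == c]
        mc = fc.mean(axis=0)
        s_between += len(fc) * np.sum((mc - mu) ** 2)
        s_within += np.sum((fc - mc) ** 2)
    return s_between / (s_within + 1e-8)
```

In a full pipeline the logits would be updated by gradient ascent on the transferability score, with an additional regularizer matching gradient variances across source prompts; the sketch only shows the forward computation.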
Published
2026-03-14
How to Cite
Zhang, E., Cao, L., Wu, Y., Zijie, Z., & Li, Y. (2026). Learning Optimal Prompt Ensemble for Multi-source Visual Prompt Transfer. Proceedings of the AAAI Conference on Artificial Intelligence, 40(33), 28176–28184. https://doi.org/10.1609/aaai.v40i33.40044
Issue
Section
AAAI Technical Track on Machine Learning X