Make Prompts Adaptable: Bayesian Modeling for Vision-Language Prompt Learning with Data-Dependent Prior

Authors

  • Youngjae Cho KAIST
  • HeeSun Bae KAIST
  • Seungjae Shin KAIST
  • Yeo Dong Youn Seoul National University
  • Weonyoung Joo Ewha Womans University
  • Il-Chul Moon KAIST

DOI:

https://doi.org/10.1609/aaai.v38i10.29037

Keywords:

ML: Multimodal Learning, CV: Language and Vision, CV: Multi-modal Vision, ML: Bayesian Learning

Abstract

Recent vision-language pre-trained (VLP) models have become the backbone for many downstream tasks, but they are typically utilized as frozen models without further training. Prompt learning improves a pre-trained VLP model by adding a learnable context vector to the inputs of the text encoder. In a few-shot learning scenario of the downstream task, maximum likelihood estimation (MLE) training can lead the context vector to over-fit dominant image features in the training data. This over-fitting can harm generalization, especially under a distribution shift between the training and test datasets. This paper presents a Bayesian framework for prompt tuning, which alleviates over-fitting in few-shot learning applications and increases the adaptability of prompts to unobserved instances. Specifically, modeling a data-dependent prior enhances the adaptability of text features to both seen and unseen image features without a performance trade-off between them. Based on this Bayesian framework, we utilize the Wasserstein gradient flow to estimate the target posterior distribution, which enables our prompt to flexibly capture the complex modes of image features. We demonstrate the effectiveness of our method on benchmark datasets across several experiments, showing statistically significant performance improvements over existing methods.
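The prompt-learning setup the abstract builds on can be illustrated with a minimal sketch: a small set of shared, learnable context vectors is prepended to each class-name embedding before the result is fed to the frozen text encoder. This is a hypothetical, dependency-free toy (names like `build_prompt`, `EMBED_DIM`, and `M` are assumptions for illustration, not the authors' code):

```python
# Hypothetical minimal sketch of prompt construction for a VLP text
# encoder: each class prompt consists of M shared learnable context
# vectors followed by the (frozen) class-name token embedding.
import random

EMBED_DIM = 8   # toy embedding size (assumption)
M = 4           # number of learnable context tokens (assumption)

def init_context(m=M, dim=EMBED_DIM):
    """Shared learnable context vectors [v_1, ..., v_M]."""
    return [[random.gauss(0.0, 0.02) for _ in range(dim)] for _ in range(m)]

def embed_class_name(name, dim=EMBED_DIM):
    """Stand-in for the frozen token embedding of a class name."""
    rng = random.Random(name)  # deterministic per class name
    return [rng.gauss(0.0, 1.0) for _ in range(dim)]

def build_prompt(context, class_name):
    """Prompt t_c = [v_1, ..., v_M, e(class_name)] for class c."""
    return context + [embed_class_name(class_name)]

context = init_context()
prompts = {c: build_prompt(context, c) for c in ["cat", "dog"]}
# Each prompt has M learnable context tokens plus one class token;
# only the context vectors would receive gradients during training.
assert all(len(p) == M + 1 for p in prompts.values())
```

In the paper's Bayesian view, the context vectors above would not be a single point estimate learned by MLE but samples from a posterior distribution, approximated here via Wasserstein gradient flow with a data-dependent prior.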

Published

2024-03-24

How to Cite

Cho, Y., Bae, H., Shin, S., Youn, Y. D., Joo, W., & Moon, I.-C. (2024). Make Prompts Adaptable: Bayesian Modeling for Vision-Language Prompt Learning with Data-Dependent Prior. Proceedings of the AAAI Conference on Artificial Intelligence, 38(10), 11552-11560. https://doi.org/10.1609/aaai.v38i10.29037

Section

AAAI Technical Track on Machine Learning I