Compound Text-Guided Prompt Tuning via Image-Adaptive Cues

Authors

  • Hao Tan: MAIS, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
  • Jun Li: MAIS, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
  • Yizhuang Zhou: Megvii Technology
  • Jun Wan: MAIS, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
  • Zhen Lei: MAIS, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; CAIR, HKISI, Chinese Academy of Sciences, Hong Kong, China
  • Xiangyu Zhang: Megvii Technology

DOI:

https://doi.org/10.1609/aaai.v38i5.28311

Keywords:

CV: Language and Vision, CV: Multi-modal Vision, ML: Multimodal Learning

Abstract

Vision-Language Models (VLMs) such as CLIP have demonstrated remarkable generalization to downstream tasks. However, existing prompt-tuning frameworks must parallelize learnable textual inputs for all categories, incurring massive GPU memory consumption when the target dataset contains a large number of categories. Moreover, previous works require category names to be included within prompts, and thus perform poorly when category names are ambiguous. To address these shortcomings, we propose Compound Text-Guided Prompt Tuning (TGP-T), which significantly reduces resource demands while achieving superior performance. We introduce text supervision into the optimization of prompts, which yields two benefits: 1) it releases the model from its reliance on pre-defined category names during inference, enabling more flexible prompt generation; and 2) it reduces the number of inputs to the text encoder, which significantly decreases GPU memory consumption. Specifically, we find that compound text supervision, i.e., category-wise and content-wise, is highly effective, since the two forms provide inter-class separability and capture intra-class variations, respectively. Moreover, we condition prompt generation on visual features through a module called Bonder, which facilitates the alignment between prompts and visual features. Extensive experiments on few-shot recognition and domain generalization demonstrate that TGP-T achieves superior performance with consistently lower training costs: it reduces GPU memory usage by 93% and attains a 2.5% performance gain on 16-shot ImageNet. The code is available at https://github.com/EricTan7/TGP-T.
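To make the architecture described above concrete, below is a minimal PyTorch sketch of the two ingredients the abstract names: a Bonder-style cross-attention module that generates prompt tokens conditioned on visual features, and a compound text-supervision loss with category-wise and content-wise targets. All module names, tensor shapes, the loss form, and the weighting term alpha are illustrative assumptions reconstructed from the abstract, not the authors' released implementation (see the linked repository for that).

```python
# Illustrative sketch of the TGP-T idea as described in the abstract.
# The Bonder, the cosine-based loss, and all hyper-parameters are
# assumptions for exposition, not the official TGP-T code.

import torch
import torch.nn as nn
import torch.nn.functional as F


class Bonder(nn.Module):
    """Cross-attention block: learnable queries attend to visual features."""

    def __init__(self, dim: int, num_prompts: int, num_heads: int = 8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_prompts, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(),
                                 nn.Linear(dim * 4, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, visual_feats: torch.Tensor) -> torch.Tensor:
        # visual_feats: (B, N, dim) patch tokens from the image encoder
        B = visual_feats.size(0)
        q = self.queries.unsqueeze(0).expand(B, -1, -1)    # (B, P, dim)
        x, _ = self.attn(q, visual_feats, visual_feats)    # prompts attend to the image
        x = self.norm1(q + x)
        x = self.norm2(x + self.ffn(x))
        return x                                           # image-adaptive prompt tokens


def compound_text_loss(prompt_emb, category_emb, content_emb, alpha=0.5):
    """Pull the encoded prompt toward both text targets (cosine distance).

    category_emb supplies inter-class separability; content_emb captures
    intra-class variations, per the abstract. alpha is an assumed weight.
    """
    l_cat = 1 - F.cosine_similarity(prompt_emb, category_emb, dim=-1).mean()
    l_con = 1 - F.cosine_similarity(prompt_emb, content_emb, dim=-1).mean()
    return l_cat + alpha * l_con


# Toy usage with random tensors standing in for CLIP features:
bonder = Bonder(dim=512, num_prompts=4)
visual = torch.randn(2, 197, 512)       # e.g. ViT-B/16 patch tokens
prompts = bonder(visual)
prompt_emb = prompts.mean(dim=1)        # stand-in for the text-encoder output
loss = compound_text_loss(prompt_emb,
                          torch.randn(2, 512),   # category-wise text embedding
                          torch.randn(2, 512))   # content-wise text embedding
loss.backward()
```

Note how only the prompt tokens pass through the text encoder, one sequence per image rather than one per category, which is consistent with the memory savings the abstract reports.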

Published

2024-03-24

How to Cite

Tan, H., Li, J., Zhou, Y., Wan, J., Lei, Z., & Zhang, X. (2024). Compound Text-Guided Prompt Tuning via Image-Adaptive Cues. Proceedings of the AAAI Conference on Artificial Intelligence, 38(5), 5061-5069. https://doi.org/10.1609/aaai.v38i5.28311

Issue

Vol. 38 No. 5 (2024)

Section

AAAI Technical Track on Computer Vision IV