Delving into Multimodal Prompting for Fine-Grained Visual Classification

Authors

  • Xin Jiang Nanjing University of Science and Technology
  • Hao Tang Nanjing University of Science and Technology
  • Junyao Gao Tongji University
  • Xiaoyu Du Nanjing University Of Science And Technology
  • Shengfeng He Singapore Management University
  • Zechao Li Nanjing University of Science and Technology

DOI:

https://doi.org/10.1609/aaai.v38i3.28034

Keywords:

CV: Object Detection & Categorization, CV: Image and Video Retrieval, CV: Multi-modal Vision, CV: Representation Learning for Vision, ML: Multimodal Learning, ML: Representation Learning

Abstract

Fine-grained visual classification (FGVC) involves categorizing fine subdivisions within a broader category, which poses challenges due to subtle inter-class discrepancies and large intra-class variations. However, prevailing approaches primarily focus on uni-modal visual concepts. Recent advancements in pre-trained vision-language models have demonstrated remarkable performance in various high-level vision tasks, yet the applicability of such models to FGVC tasks remains uncertain. In this paper, we aim to fully exploit the capabilities of cross-modal description to tackle FGVC tasks and propose a novel multimodal prompting solution, denoted as MP-FGVC, based on the contrastive language-image pre-training (CLIP) model. Our MP-FGVC comprises a multimodal prompting scheme and a multimodal adaptation scheme. The former includes a Subcategory-specific Vision Prompt (SsVP) and a Discrepancy-aware Text Prompt (DaTP), which explicitly highlight the subcategory-specific discrepancies from the perspectives of both vision and language. The latter aligns the vision and text prompting elements in a common semantic space, facilitating cross-modal collaborative reasoning through a Vision-Language Fusion Module (VLFM) for further improvement on FGVC. Moreover, we tailor a two-stage optimization strategy for MP-FGVC to fully leverage the pre-trained CLIP model and expedite efficient adaptation for FGVC. Extensive experiments conducted on four FGVC datasets demonstrate the effectiveness of our MP-FGVC.
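To make the multimodal adaptation scheme concrete, the sketch below illustrates one plausible reading of the VLFM described in the abstract: vision and text prompt features are projected into a common semantic space and fused with cross-attention before subcategory classification. This is a minimal PyTorch sketch, not the authors' released implementation; the class name VisionLanguageFusion, all dimensions, the mean-pooling step, and the dummy token counts are illustrative assumptions.

```python
# Hypothetical sketch of the cross-modal fusion idea from the abstract;
# names, dimensions, and pooling are assumptions, not the paper's code.
import torch
import torch.nn as nn

class VisionLanguageFusion(nn.Module):
    """Aligns vision and text prompt features in a shared semantic space
    and fuses them with cross-attention, loosely mirroring the VLFM."""
    def __init__(self, vis_dim=768, txt_dim=512, common_dim=512,
                 num_heads=8, num_classes=200):
        super().__init__()
        # Project both modalities into a common semantic space.
        self.vis_proj = nn.Linear(vis_dim, common_dim)
        self.txt_proj = nn.Linear(txt_dim, common_dim)
        # Vision tokens attend to discrepancy-aware text tokens.
        self.cross_attn = nn.MultiheadAttention(common_dim, num_heads,
                                                batch_first=True)
        self.classifier = nn.Linear(common_dim, num_classes)

    def forward(self, vis_tokens, txt_tokens):
        # vis_tokens: (B, Nv, vis_dim), e.g. CLIP ViT patch/prompt tokens
        # txt_tokens: (B, Nt, txt_dim), e.g. encoded text prompt tokens
        v = self.vis_proj(vis_tokens)
        t = self.txt_proj(txt_tokens)
        fused, _ = self.cross_attn(query=v, key=t, value=t)
        # Pool the fused tokens and predict the fine-grained subcategory.
        return self.classifier(fused.mean(dim=1))

# Usage with dummy tensors (batch of 2, 197 vision tokens, 77 text tokens):
fusion = VisionLanguageFusion()
logits = fusion(torch.randn(2, 197, 768), torch.randn(2, 77, 512))
print(logits.shape)  # torch.Size([2, 200])
```

In the paper's two-stage strategy, a module like this would be trained on top of largely frozen CLIP encoders, so that adaptation to FGVC stays lightweight while the pre-trained cross-modal alignment is preserved.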

Published

2024-03-24

How to Cite

Jiang, X., Tang, H., Gao, J., Du, X., He, S., & Li, Z. (2024). Delving into Multimodal Prompting for Fine-Grained Visual Classification. Proceedings of the AAAI Conference on Artificial Intelligence, 38(3), 2570-2578. https://doi.org/10.1609/aaai.v38i3.28034

Section

AAAI Technical Track on Computer Vision II