TG-LLaVA: Text Guided LLaVA via Learnable Latent Embeddings

Authors

  • Dawei Yan School of Cybersecurity, Northwestern Polytechnical University; AI Business, Alibaba Group
  • Pengcheng Li AI Business, Alibaba Group
  • Yang Li AI Business, Alibaba Group
  • Hao Chen College of Computer Science and Technology, Zhejiang University
  • Qingguo Chen AI Business, Alibaba Group
  • Weihua Luo AI Business, Alibaba Group
  • Wei Dong College of Information and Control Engineering, Xi’an University of Architecture and Technology
  • Qingsen Yan School of Computer Science, Northwestern Polytechnical University
  • Haokui Zhang School of Cybersecurity, Northwestern Polytechnical University
  • Chunhua Shen College of Computer Science and Technology, Zhejiang University

DOI:

https://doi.org/10.1609/aaai.v39i9.32982

Abstract

Inspired by the success of vision-language models (VLMs), a growing number of researchers have focused on improving VLMs and have achieved promising results. However, most existing methods concentrate on optimizing the connector and enhancing the language-model component, while neglecting improvements to the vision encoder itself. In contrast, we propose Text Guided LLaVA (TG-LLaVA), which optimizes VLMs by guiding the vision encoder with text, offering a new and orthogonal optimization direction. Specifically, inspired by the purpose-driven logic inherent in human behavior, we use learnable latent embeddings as a bridge to analyze the textual instruction and inject the analysis results into the vision encoder as guidance, refining its features. A second set of latent embeddings then extracts additional text-guided details from high-resolution local patches as auxiliary information. With this textual guidance, the vision encoder extracts text-related features, much as humans focus on the parts of an image most relevant to a question, which leads to better answers. Experiments on various datasets validate the effectiveness of the proposed method. Remarkably, without any additional training data, TG-LLaVA brings larger gains over the baseline (LLaVA-1.5) than other concurrent methods, and the improvements hold consistently across different settings.
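The abstract describes the guidance mechanism only at a high level. As a rough illustration of the idea, the following PyTorch sketch shows one plausible way to let learnable latent embeddings distill a textual instruction into guidance tokens and fuse them into the visual features. All module names, the cross-attention design, and the hyperparameters below are our assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class TextGuidedVisionBlock(nn.Module):
    """Illustrative sketch: text-guided refinement of vision-encoder features.

    Learnable latent embeddings cross-attend over the instruction features to
    produce "guidance" tokens; a second cross-attention injects that guidance
    back into the visual tokens. This is a hypothetical reconstruction of the
    idea sketched in the abstract, not the authors' code.
    """

    def __init__(self, dim: int = 1024, num_latents: int = 16, num_heads: int = 8):
        super().__init__()
        # Learnable latent embeddings acting as a bridge to the text.
        self.latents = nn.Parameter(torch.randn(num_latents, dim) * 0.02)
        # Step 1: latents attend to the encoded instruction tokens.
        self.text_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Step 2: visual tokens attend to the text-derived guidance tokens.
        self.fuse_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, vis_feats: torch.Tensor, text_feats: torch.Tensor) -> torch.Tensor:
        # vis_feats:  (B, N_vis, dim) vision-encoder features
        # text_feats: (B, N_txt, dim) projected instruction embeddings
        b = vis_feats.size(0)
        latents = self.latents.unsqueeze(0).expand(b, -1, -1)
        # Distill the instruction into a small set of guidance tokens.
        guidance, _ = self.text_attn(latents, text_feats, text_feats)
        # Add the guidance back into the visual features (residual), steering
        # the encoder toward question-relevant regions.
        fused, _ = self.fuse_attn(vis_feats, guidance, guidance)
        return self.norm(vis_feats + fused)

if __name__ == "__main__":
    block = TextGuidedVisionBlock()
    vis = torch.randn(2, 576, 1024)  # e.g. 576 patch tokens from a ViT backbone
    txt = torch.randn(2, 32, 1024)   # instruction embeddings, projected to dim
    print(block(vis, txt).shape)     # torch.Size([2, 576, 1024])
```

Under the same assumptions, the paper's second set of latent embeddings could reuse this block, with `vis_feats` replaced by features of high-resolution local patches, to produce the auxiliary text-guided information mentioned above.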

Published

2025-04-11

How to Cite

Yan, D., Li, P., Li, Y., Chen, H., Chen, Q., Luo, W., … Shen, C. (2025). TG-LLaVA: Text Guided LLaVA via Learnable Latent Embeddings. Proceedings of the AAAI Conference on Artificial Intelligence, 39(9), 9076–9084. https://doi.org/10.1609/aaai.v39i9.32982

Section

AAAI Technical Track on Computer Vision VIII