VHM: Versatile and Honest Vision Language Model for Remote Sensing Image Analysis

Authors

  • Chao Pang — School of Artificial Intelligence, Wuhan University; School of Computer Science, Wuhan University
  • Xingxing Weng School of Computer Science, Wuhan University
  • Jiang Wu Shanghai Artificial Intelligence Laboratory
  • Jiayu Li School of Computer Science, Wuhan University
  • Yi Liu School of Computer Science, Wuhan University
  • Jiaxing Sun — Shanghai Artificial Intelligence Laboratory; State Key Lab. of LIESMARS, Wuhan University
  • Weijia Li Sun Yat-Sen University
  • Shuai Wang SenseTime Research
  • Litong Feng SenseTime Research
  • Gui-Song Xia — School of Artificial Intelligence, Wuhan University; School of Computer Science, Wuhan University; State Key Lab. of LIESMARS, Wuhan University; Institute for Math & AI, Wuhan University
  • Conghui He — Shanghai Artificial Intelligence Laboratory; SenseTime Research

DOI:

https://doi.org/10.1609/aaai.v39i6.32683

Abstract

This paper develops a Versatile and Honest vision language Model (VHM) for remote sensing image analysis. VHM is built on a large-scale remote sensing image-text dataset with rich-content captions (VersaD) and an honest instruction dataset comprising both factual and deceptive questions (HnstD). Unlike prevailing remote sensing image-text datasets, in which image captions focus on a few prominent objects and their relationships, VersaD captions provide detailed information about image properties, object attributes, and the overall scene. This comprehensive captioning enables VHM to thoroughly understand remote sensing images and perform diverse remote sensing tasks. Moreover, unlike existing remote sensing instruction datasets that include only factual questions, HnstD contains additional deceptive questions stemming from the non-existence of objects. This feature prevents VHM from producing affirmative answers to nonsense queries, thereby ensuring its honesty. In our experiments, VHM significantly outperforms various vision language models on the common tasks of scene classification, visual question answering, and visual grounding. Additionally, VHM achieves competent performance on several unexplored tasks, such as building vectorization, multi-label classification, and honest question answering.

Published

2025-04-11

How to Cite

Pang, C., Weng, X., Wu, J., Li, J., Liu, Y., Sun, J., … He, C. (2025). VHM: Versatile and Honest Vision Language Model for Remote Sensing Image Analysis. Proceedings of the AAAI Conference on Artificial Intelligence, 39(6), 6381–6388. https://doi.org/10.1609/aaai.v39i6.32683

Section

AAAI Technical Track on Computer Vision V