VHM: Versatile and Honest Vision Language Model for Remote Sensing Image Analysis
DOI: https://doi.org/10.1609/aaai.v39i6.32683

Abstract
This paper develops a Versatile and Honest vision language Model (VHM) for remote sensing image analysis. VHM is built on a large-scale remote sensing image-text dataset with rich-content captions (VersaD) and an honest instruction dataset comprising both factual and deceptive questions (HnstD). Unlike prevailing remote sensing image-text datasets, in which image captions focus on a few prominent objects and their relationships, VersaD captions provide detailed information about image properties, object attributes, and the overall scene. This comprehensive captioning enables VHM to thoroughly understand remote sensing images and perform diverse remote sensing tasks. Moreover, unlike existing remote sensing instruction datasets that include only factual questions, HnstD contains additional deceptive questions about non-existent objects. This feature prevents VHM from producing affirmative answers to nonsense queries, thereby ensuring its honesty. In our experiments, VHM significantly outperforms various vision language models on the common tasks of scene classification, visual question answering, and visual grounding. Additionally, VHM achieves competent performance on several previously unexplored tasks, such as building vectorization, multi-label classification, and honest question answering.

Published
2025-04-11
How to Cite
Pang, C., Weng, X., Wu, J., Li, J., Liu, Y., Sun, J., … He, C. (2025). VHM: Versatile and Honest Vision Language Model for Remote Sensing Image Analysis. Proceedings of the AAAI Conference on Artificial Intelligence, 39(6), 6381–6388. https://doi.org/10.1609/aaai.v39i6.32683
Section
AAAI Technical Track on Computer Vision V