CVLUE: A New Benchmark Dataset for Chinese Vision-Language Understanding Evaluation

Authors

  • Yuxuan Wang, Zhejiang Lab
  • Yijun Liu, Harbin Institute of Technology
  • Fei Yu, Zhejiang Lab
  • Chen Huang, Zhejiang Lab
  • Kexin Li, Zhejiang Lab
  • Zhiguo Wan, Zhejiang Lab
  • Wanxiang Che, Harbin Institute of Technology
  • Hongyang Chen, Zhejiang Lab

DOI:

https://doi.org/10.1609/aaai.v39i8.32884

Abstract

Despite the rapid development of Chinese vision-language models (VLMs), most existing Chinese vision-language (VL) datasets are built on Western-centric images drawn from existing English VL datasets. This cultural bias in the images makes such datasets unsuitable for evaluating VLMs in Chinese cultural contexts. To remedy this issue, we present a new Chinese Vision-Language Understanding Evaluation (CVLUE) benchmark dataset, in which the selection of object categories and images is driven entirely by Chinese native speakers, ensuring that the source images are representative of Chinese culture. The benchmark comprises four distinct VL tasks: image-text retrieval, visual question answering, visual grounding, and visual dialogue. We present a detailed statistical analysis of CVLUE and provide a baseline performance analysis of several open-source multilingual VLMs on CVLUE and its English counterparts, revealing the performance gap between English and Chinese. Our in-depth category-level analysis exposes a lack of Chinese cultural knowledge in existing VLMs. We also find that fine-tuning on Chinese culture-related VL datasets effectively enhances VLMs' understanding of Chinese culture.

Published

2025-04-11

How to Cite

Wang, Y., Liu, Y., Yu, F., Huang, C., Li, K., Wan, Z., Che, W., & Chen, H. (2025). CVLUE: A New Benchmark Dataset for Chinese Vision-Language Understanding Evaluation. Proceedings of the AAAI Conference on Artificial Intelligence, 39(8), 8196-8204. https://doi.org/10.1609/aaai.v39i8.32884

Section

AAAI Technical Track on Computer Vision VII