MM-CamObj: A Comprehensive Multimodal Dataset for Camouflaged Object Scenarios

Authors

  • Jiacheng Ruan, Shanghai Jiao Tong University, Shanghai, China
  • Wenzhen Yuan, Shanghai Jiao Tong University, Shanghai, China
  • Zehao Lin, Institute for Advanced Algorithms Research, Shanghai, China
  • Ning Liao, Institute for Advanced Algorithms Research, Shanghai, China
  • Zhiyu Li, Institute for Advanced Algorithms Research, Shanghai, China
  • Feiyu Xiong, Institute for Advanced Algorithms Research, Shanghai, China
  • Ting Liu, Shanghai Jiao Tong University, Shanghai, China
  • Yuzhuo Fu, Shanghai Jiao Tong University, Shanghai, China

DOI:

https://doi.org/10.1609/aaai.v39i7.32723

Abstract

Large visual-language models (LVLMs) have achieved great success in multiple applications. However, they still encounter challenges in complex scenes, especially those involving camouflaged objects, primarily because training datasets lack samples of camouflaged scenes. To mitigate this issue, we construct the MM-CamObj dataset for the first time, comprising two subsets: CamObj-Align and CamObj-Instruct. Specifically, CamObj-Align contains 11,363 image-text pairs and is designed for VL alignment, injecting rich knowledge of camouflaged scenes into LVLMs. CamObj-Instruct is collected for fine-tuning LVLMs with improved instruction-following capabilities; it includes 11,363 images and 68,849 conversations with diverse instructions. Based on the MM-CamObj dataset, we propose CamObj-Llava, an LVLM specifically designed for addressing tasks in camouflaged scenes. To help our model effectively acquire knowledge about camouflaged objects and scenes, we introduce a curriculum learning strategy with six distinct modes. Additionally, we construct CamObj-Bench to evaluate existing LVLMs' capabilities in understanding, recognition, localization, and counting in camouflaged scenes. This benchmark includes 600 images and 7 tasks, with a total of 9,449 questions. Extensive experiments are conducted on CamObj-Bench with CamObj-Llava, 8 existing open-source LVLMs, and 3 closed-source LVLMs. Surprisingly, the results indicate that our model achieves a 25.84% improvement over GPT-4o in 4 out of 7 tasks.

Published

2025-04-11

How to Cite

Ruan, J., Yuan, W., Lin, Z., Liao, N., Li, Z., Xiong, F., … Fu, Y. (2025). MM-CamObj: A Comprehensive Multimodal Dataset for Camouflaged Object Scenarios. Proceedings of the AAAI Conference on Artificial Intelligence, 39(7), 6740–6748. https://doi.org/10.1609/aaai.v39i7.32723

Section

AAAI Technical Track on Computer Vision VI