CrossCheck-Bench: Diagnosing Compositional Failures in Multimodal Conflict Resolution

Authors

  • Baoliang Tian, ByteDance
  • Yuxuan Si, Zhejiang University; ByteDance
  • Jilong Wang, Institute of Automation of the Chinese Academy of Sciences; ByteDance
  • LingYao Li, ByteDance
  • Zhongyuan Bao, ByteDance
  • Zineng Zhou, ByteDance
  • Tao Wang, ByteDance
  • Sixu Li, ByteDance
  • Ziyao Xu, ByteDance
  • Mingze Wang, ByteDance
  • Zhouzhuo Zhang, ByteDance
  • Zhihao Wang, ByteDance
  • Yi Ke Yun, ByteDance
  • Ke Tian, ByteDance
  • Ning Yang, Institute of Automation of the Chinese Academy of Sciences
  • Minghui Qiu, ByteDance

DOI:

https://doi.org/10.1609/aaai.v40i31.39788

Abstract

Multimodal Large Language Models are primarily trained and evaluated on aligned image-text pairs, which leaves their ability to detect and resolve real-world inconsistencies largely unexplored. In open-domain applications, visual and textual cues often conflict, requiring models to perform structured reasoning beyond surface-level alignment. We introduce CrossCheck-Bench, a diagnostic benchmark for evaluating contradiction detection in multimodal inputs. The benchmark adopts a hierarchical task framework covering three levels of reasoning complexity and defines seven atomic capabilities essential for resolving cross-modal inconsistencies. CrossCheck-Bench includes 15k question-answer pairs sourced from real-world artifacts with synthetically injected contradictions. The dataset is constructed through a multi-stage annotation pipeline involving more than 450 expert hours to ensure semantic validity and calibrated difficulty across perception, integration, and reasoning. We evaluate 13 state-of-the-art vision-language models and observe a consistent performance drop as tasks shift from perceptual matching to logical contradiction detection. Most models perform well on isolated entity recognition but fail when multiple clues must be synthesized for conflict reasoning. Capability-level analysis further reveals uneven skill acquisition, especially in tasks requiring multi-step inference or rule-based validation. Additional probing shows that conventional prompting strategies such as Chain-of-Thought and Set-of-Mark yield only marginal gains. By contrast, methods that interleave symbolic reasoning with grounded visual processing achieve more stable improvements. These results highlight a persistent bottleneck in multimodal reasoning and suggest new directions for building models capable of robust cross-modal verification.
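To make the benchmark's structure concrete, below is a minimal Python sketch of how an item with an injected cross-modal contradiction might be represented and scored per reasoning level. All field names, label values, and the helper function are illustrative assumptions; the abstract does not specify the released schema or evaluation code.

  from collections import defaultdict
  from dataclasses import dataclass

  @dataclass
  class CrossCheckItem:
      # One QA pair probing a synthetically injected cross-modal
      # contradiction. Every field name here is hypothetical.
      image_path: str   # real-world visual artifact
      text: str         # accompanying text that may conflict with the image
      question: str     # e.g., "Do the image and the text agree on the date?"
      answer: str       # gold label, e.g., "contradiction" or "consistent" (assumed format)
      level: str        # "perception", "integration", or "reasoning" (assumed labels)
      capability: str   # one of the seven atomic capabilities (assumed field)

  def accuracy_by_level(items, predictions):
      """Exact-match accuracy grouped by reasoning level."""
      hits, totals = defaultdict(int), defaultdict(int)
      for item, pred in zip(items, predictions):
          totals[item.level] += 1
          hits[item.level] += int(pred.strip().lower() == item.answer.strip().lower())
      return {level: hits[level] / totals[level] for level in totals}

Grouping scores by level in this way would surface the pattern the abstract reports: accuracy that degrades as items move from perceptual matching toward logical contradiction detection.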

Published

2026-03-14

How to Cite

Tian, B., Si, Y., Wang, J., Li, L., Bao, Z., Zhou, Z., … Qiu, M. (2026). CrossCheck-Bench: Diagnosing Compositional Failures in Multimodal Conflict Resolution. Proceedings of the AAAI Conference on Artificial Intelligence, 40(31), 25887–25895. https://doi.org/10.1609/aaai.v40i31.39788

Issue

Vol. 40 No. 31 (2026)

Section

AAAI Technical Track on Machine Learning VIII