MMhops-R1: Multimodal Multi-hop Reasoning

Authors

  • Tao Zhang: Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences; Beijing Key Laboratory of Super Intelligent Security of Multi-Modal Information; Tencent Inc.
  • Ziqi Zhang: Institute of Automation, Chinese Academy of Sciences; Beijing Key Laboratory of Super Intelligent Security of Multi-Modal Information
  • Zongyang Ma: Institute of Automation, Chinese Academy of Sciences; Beijing Key Laboratory of Super Intelligent Security of Multi-Modal Information
  • Yuxin Chen: Tencent Inc.
  • Bing Li: Institute of Automation, Chinese Academy of Sciences; Beijing Key Laboratory of Super Intelligent Security of Multi-Modal Information; PeopleAI Inc.
  • Chunfeng Yuan: Institute of Automation, Chinese Academy of Sciences; Beijing Key Laboratory of Super Intelligent Security of Multi-Modal Information
  • Guangting Wang: Tencent Inc.
  • Fengyun Rao: Tencent Inc.
  • Ying Shan: Tencent Inc.
  • Weiming Hu: Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences; Beijing Key Laboratory of Super Intelligent Security of Multi-Modal Information; School of Information Science and Technology, ShanghaiTech University

DOI:

https://doi.org/10.1609/aaai.v40i33.40068

Abstract

The ability to perform multi-modal multi-hop reasoning by iteratively integrating information across various modalities and external knowledge is critical for addressing complex real-world challenges. However, existing Multi-modal Large Language Models (MLLMs) are predominantly limited to single-step reasoning, as existing benchmarks lack the complexity needed to evaluate and drive multi-hop abilities. To bridge this gap, we introduce MMhops, a novel, large-scale benchmark designed to systematically evaluate and foster multi-modal multi-hop reasoning. The MMhops dataset comprises two challenging task formats, Bridging and Comparison, which require models to dynamically construct complex reasoning chains by integrating external knowledge. To tackle the challenges posed by MMhops, we propose MMhops-R1, a novel multi-modal Retrieval-Augmented Generation (mRAG) framework for dynamic reasoning. Our framework utilizes reinforcement learning to optimize the model for autonomously planning reasoning paths, formulating targeted queries, and synthesizing multi-level information. Comprehensive experiments demonstrate that MMhops-R1 significantly outperforms strong baselines on MMhops, highlighting that dynamic planning and multi-modal knowledge integration are crucial for complex reasoning. Moreover, MMhops-R1 demonstrates strong generalization to tasks requiring fixed-hop reasoning, underscoring the robustness of our dynamic planning approach.
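The iterative plan-query-synthesize loop described above can be illustrated with a minimal sketch. This is not the authors' MMhops-R1 implementation: `retrieve` and `generate` are hypothetical stand-ins for a multimodal retriever and an MLLM policy, and the two-hop "bridging" pattern is hard-coded purely for illustration.

```python
# Minimal sketch of an iterative multi-hop RAG control loop.
# NOT the MMhops-R1 method; `retrieve` and `generate` are toy stand-ins.

def retrieve(query, kb):
    """Toy retriever: return the first knowledge entry whose key appears in the query."""
    for key, fact in kb.items():
        if key in query:
            return fact
    return None

def generate(question, evidence):
    """Toy policy: emit either a follow-up query or a final answer.

    A real system would let the MLLM plan this step; here a two-hop
    'bridging' pattern is hard-coded for illustration.
    """
    if not evidence:
        return ("query", question)      # hop 1: look up the bridge entity
    if len(evidence) == 1:
        return ("query", evidence[-1])  # hop 2: query with the bridge entity
    return ("answer", evidence[-1])     # enough evidence: answer

def multi_hop_answer(question, kb, max_hops=4):
    """Loop: plan an action, retrieve if needed, accumulate evidence."""
    evidence = []
    for _ in range(max_hops):
        action, payload = generate(question, evidence)
        if action == "answer":
            return payload
        fact = retrieve(payload, kb)
        if fact is None:
            break
        evidence.append(fact)
    return evidence[-1] if evidence else None

# Two-hop bridging example: question -> bridge entity -> final answer.
kb = {
    "painter of the Mona Lisa": "Leonardo da Vinci",
    "Leonardo da Vinci": "born in Vinci, Italy",
}
print(multi_hop_answer("Where was the painter of the Mona Lisa born?", kb))
```

The loop terminates either when the policy decides it has gathered enough evidence to answer or when a hop budget is exhausted, mirroring the dynamic (rather than fixed-hop) planning the abstract emphasizes.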

Published

2026-03-14

How to Cite

Zhang, T., Zhang, Z., Ma, Z., Chen, Y., Li, B., Yuan, C., … Hu, W. (2026). MMhops-R1: Multimodal Multi-hop Reasoning. Proceedings of the AAAI Conference on Artificial Intelligence, 40(33), 28391–28399. https://doi.org/10.1609/aaai.v40i33.40068

Section

AAAI Technical Track on Machine Learning X