L2V-CoT: Cross-Modal Transfer of Chain-of-Thought Reasoning via Latent Intervention

Authors

  • Yu-Liang Zhan, Renmin University of China
  • Xinyu Tang, Renmin University of China
  • Han Wan, Renmin University of China
  • Jian Li, Renmin University of China
  • Jirong Wen, Renmin University of China
  • Hao Sun, Renmin University of China

DOI:

https://doi.org/10.1609/aaai.v40i15.38228

Abstract

Recently, Chain-of-Thought (CoT) reasoning has significantly enhanced the capabilities of large language models (LLMs), but Vision–Language Models (VLMs) still struggle with multi-step reasoning tasks due to limited multimodal reasoning data. To bridge this gap, researchers have explored methods to transfer CoT reasoning from LLMs to VLMs. However, existing approaches either incur high training costs or require architectural alignment. In this paper, we use Linear Artificial Tomography (LAT) to empirically show that LLMs and VLMs share similar low-frequency latent representations of CoT reasoning despite architectural differences. Based on this insight, we propose L2V-CoT, a novel training-free latent intervention approach that transfers CoT reasoning from LLMs to VLMs. L2V-CoT extracts and resamples low-frequency CoT representations from LLMs in the frequency domain, enabling dimension matching and latent injection into VLMs during inference to enhance reasoning capabilities. Extensive experiments demonstrate that our approach consistently outperforms training-free baselines and even surpasses supervised methods.
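The core mechanism described in the abstract — extracting a low-frequency CoT representation from an LLM and resampling it in the frequency domain so its dimensionality matches the target VLM — can be illustrated with a minimal sketch. The function below is an assumption-laden illustration, not the authors' implementation: the hidden sizes (4096 for the LLM, 3072 for the VLM), the `keep_ratio` cutoff, the norm-rescaling step, and the additive injection `h + alpha * v` are all hypothetical choices standing in for details given in the paper itself.

```python
import numpy as np

def resample_low_freq(vec, target_dim, keep_ratio=0.25):
    """Low-pass filter a steering vector and resample it to a new dimension.

    A sketch of frequency-domain dimension matching: keep only the
    lowest-frequency FFT bins of `vec`, then invert the FFT at the
    target length to obtain a vector of size `target_dim`.
    """
    spec = np.fft.rfft(vec)                      # one-sided spectrum of the LLM vector
    k = max(1, int(len(spec) * keep_ratio))      # number of low-frequency bins to keep
    low = np.zeros(target_dim // 2 + 1, dtype=complex)
    m = min(k, len(low))
    low[:m] = spec[:m]                           # copy low-frequency content only
    out = np.fft.irfft(low, n=target_dim)        # inverse FFT at the VLM's hidden size
    # Rescale so the injected vector matches the original vector's norm
    return out * (np.linalg.norm(vec) / (np.linalg.norm(out) + 1e-8))

# Hypothetical dimensions: LLM hidden size 4096 -> VLM hidden size 3072
llm_cot_direction = np.random.randn(4096)
vlm_injection = resample_low_freq(llm_cot_direction, 3072)
# At inference, a VLM hidden state h would be steered as h + alpha * vlm_injection
```

Because the inverse FFT length is decoupled from the input length, the same low-frequency content can be rendered at any target dimensionality, which is what makes this kind of cross-architecture transfer possible without training.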

Published

2026-03-14

How to Cite

Zhan, Y.-L., Tang, X., Wan, H., Li, J., Wen, J., & Sun, H. (2026). L2V-CoT: Cross-Modal Transfer of Chain-of-Thought Reasoning via Latent Intervention. Proceedings of the AAAI Conference on Artificial Intelligence, 40(15), 12358–12366. https://doi.org/10.1609/aaai.v40i15.38228

Section

AAAI Technical Track on Computer Vision XII