AdaMCoT: Rethinking Cross-Lingual Factual Reasoning Through Adaptive Multilingual Chain-of-Thought

Authors

  • Zheng Weihua — Agency for Science, Technology and Research (A*STAR), Singapore; Singapore University of Technology and Design
  • Xin Huang — Agency for Science, Technology and Research (A*STAR), Singapore
  • Zhengyuan Liu — Agency for Science, Technology and Research (A*STAR), Singapore
  • Tarun Kumar Vangani — Agency for Science, Technology and Research (A*STAR), Singapore
  • Bowei Zou — Agency for Science, Technology and Research (A*STAR), Singapore
  • Xiyan Tao — Agency for Science, Technology and Research (A*STAR), Singapore
  • Yuhao Wu — Singapore University of Technology and Design
  • AiTi Aw — Agency for Science, Technology and Research (A*STAR), Singapore
  • Nancy F. Chen — Agency for Science, Technology and Research (A*STAR), Singapore
  • Roy Ka-Wei Lee — Singapore University of Technology and Design

DOI:

https://doi.org/10.1609/aaai.v40i40.40678

Abstract

Large language models (LLMs) have shown impressive multilingual capabilities through pretraining on diverse corpora. While these models exhibit strong reasoning abilities, their performance varies significantly across languages due to imbalanced training data distribution. Existing approaches that rely on sample-level translation for extensive multilingual pretraining and cross-lingual tuning face scalability challenges and often fail to capture nuanced reasoning processes across languages. In this paper, we introduce **AdaMCoT** (Adaptive Multilingual Chain-of-Thought), a framework that enhances multilingual factual reasoning by dynamically routing thought processes through intermediary “thinking languages” before generating target-language responses. AdaMCoT leverages a language-agnostic core and incorporates an adaptive, reward-based mechanism for selecting optimal reasoning pathways without requiring additional pretraining. Our comprehensive evaluation across multiple benchmarks demonstrates substantial improvements in both factual reasoning quality and cross-lingual consistency, with particularly strong performance gains in low-resource language settings. An in-depth analysis of the model’s hidden states and semantic space further elucidates the underlying mechanism of our method. The results suggest that adaptive reasoning paths can effectively bridge the performance gap between high- and low-resource languages while maintaining cultural and linguistic nuances.
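The abstract's core idea of reward-based routing to a "thinking language" can be pictured with a small toy sketch. This is purely illustrative and is not the paper's implementation: the function names, the candidate language set, and the hand-written reward heuristic are all hypothetical stand-ins for AdaMCoT's learned, adaptive mechanism.

```python
# Illustrative sketch only: a toy stand-in for reward-based selection of an
# intermediate "thinking language". All names and scores here are invented
# for exposition, not taken from the AdaMCoT paper.

def toy_reward(query: str, lang: str) -> float:
    """Hypothetical reward scoring how well `lang` would serve as the
    intermediate reasoning language for this query."""
    base = {"en": 0.9, "zh": 0.6, "sw": 0.3}  # invented resource priors
    # Toy bonus: prefer Chinese as the pivot when the query uses CJK script.
    bonus = 0.5 if lang == "zh" and any("\u4e00" <= ch <= "\u9fff" for ch in query) else 0.0
    return base.get(lang, 0.0) + bonus

def select_thinking_language(query: str, candidates: list[str]) -> str:
    """Route the chain-of-thought through the highest-reward candidate;
    the model would then reason in that language before answering in the
    user's target language."""
    return max(candidates, key=lambda lang: toy_reward(query, lang))

print(select_thinking_language("What is the capital of Kenya?", ["en", "zh", "sw"]))  # → en
print(select_thinking_language("肯尼亚的首都是哪里？", ["en", "zh", "sw"]))  # → zh
```

In the actual framework the reward signal is learned rather than hand-coded, which is what lets routing adapt per query instead of always falling back to the highest-resource language.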

Published

2026-03-14

How to Cite

Weihua, Z., Huang, X., Liu, Z., Vangani, T. K., Zou, B., Tao, X., … Lee, R. K.-W. (2026). AdaMCoT: Rethinking Cross-Lingual Factual Reasoning Through Adaptive Multilingual Chain-of-Thought. Proceedings of the AAAI Conference on Artificial Intelligence, 40(40), 33863–33871. https://doi.org/10.1609/aaai.v40i40.40678

Section

AAAI Technical Track on Natural Language Processing V