Unveiling the Impact of Coding Data Instruction Fine-Tuning on Large Language Models Reasoning

Authors

  • Xinlu Zhang University of California, Santa Barbara
  • Zhiyu Zoey Chen The University of Texas at Dallas
  • Xi Ye The University of Texas at Austin
  • Xianjun Yang University of California, Santa Barbara
  • Lichang Chen University of Maryland, College Park
  • William Yang Wang University of California, Santa Barbara
  • Linda Ruth Petzold University of California, Santa Barbara

DOI:

https://doi.org/10.1609/aaai.v39i24.34789

Abstract

Instruction Fine-Tuning (IFT) significantly enhances the zero-shot capabilities of pretrained Large Language Models (LLMs). While coding data is known to boost LLM reasoning abilities during pretraining, its role in activating internal reasoning capacities during IFT remains understudied. This paper investigates a key question: How does coding data impact LLMs' reasoning capacities during the IFT stage? To explore this, we thoroughly examine the impact of coding data across different coding data proportions, model families, sizes, and reasoning domains, from various perspectives. Specifically, we create three IFT datasets with increasing coding data proportions, fine-tune six LLM backbones across different families and scales on these datasets, evaluate the tuned models' performance across twelve tasks in three reasoning domains, and analyze the outcomes from three broad-to-granular perspectives: overall, domain-level, and task-specific. Our holistic analysis provides valuable insights into each perspective. First, coding data tuning enhances the overall reasoning capabilities of LLMs across different model families and scales. Moreover, while the impact of coding data varies by domain, it shows consistent trends within each domain across different model families and scales. Additionally, coding data generally provides comparable task-specific benefits across model families, with the optimal proportion in the IFT dataset being task-dependent.

Published

2025-04-11

How to Cite

Zhang, X., Chen, Z. Z., Ye, X., Yang, X., Chen, L., Wang, W. Y., & Petzold, L. R. (2025). Unveiling the Impact of Coding Data Instruction Fine-Tuning on Large Language Models Reasoning. Proceedings of the AAAI Conference on Artificial Intelligence, 39(24), 25949-25957. https://doi.org/10.1609/aaai.v39i24.34789

Issue

Section

AAAI Technical Track on Natural Language Processing III