StepFun-Formalizer: Unlocking the Autoformalization Potential of LLMs Through Knowledge-Reasoning Fusion

Authors

  • Yutong Wu State Key Lab of Processors, Institute of Computing Technology, CAS; University of Chinese Academy of Sciences
  • Di Huang State Key Lab of Processors, Institute of Computing Technology, CAS
  • Ruosi Wan StepFun Inc.
  • Yue Peng StepFun Inc.
  • Shijie Shang StepFun Inc.
  • Chenrui Cao State Key Lab of Processors, Institute of Computing Technology, CAS; University of Science and Technology of China
  • Lei Qi State Key Lab of Processors, Institute of Computing Technology, CAS; University of Science and Technology of China
  • Rui Zhang State Key Lab of Processors, Institute of Computing Technology, CAS
  • Xishan Zhang State Key Lab of Processors, Institute of Computing Technology, CAS
  • Zidong Du State Key Lab of Processors, Institute of Computing Technology, CAS
  • Jie Yan StepFun Inc.
  • Xing Hu State Key Lab of Processors, Institute of Computing Technology, CAS

DOI:

https://doi.org/10.1609/aaai.v40i40.40691

Abstract

Autoformalization aims to translate natural-language mathematical statements into a formal language. While LLMs have accelerated progress in this area, existing methods still suffer from low accuracy. We identify two key abilities for effective autoformalization: comprehensive mastery of formal-language domain knowledge, and the reasoning capability to understand natural-language problems and align informal with formal expressions. Without the former, a model cannot identify the correct formal objects; without the latter, it struggles to interpret real-world contexts and map them precisely into formal expressions. To address these gaps, we introduce ThinkingF, a data synthesis and training pipeline that improves both abilities. First, we construct two datasets: one by distilling and selecting large-scale examples rich in formal knowledge, and another by generating informal-to-formal reasoning trajectories guided by expert-designed templates. We then apply SFT and RLVR with these datasets to further fuse and refine the two abilities. The resulting 7B and 32B models exhibit both comprehensive formal knowledge and strong informal-to-formal reasoning. Notably, StepFun-Formalizer-32B achieves SOTA BEq@1 scores of 40.5% on FormalMATH-Lite and 26.7% on ProverBench, surpassing all prior general-purpose and specialized models.

Published

2026-03-14

How to Cite

Wu, Y., Huang, D., Wan, R., Peng, Y., Shang, S., Cao, C., … Hu, X. (2026). StepFun-Formalizer: Unlocking the Autoformalization Potential of LLMs Through Knowledge-Reasoning Fusion. Proceedings of the AAAI Conference on Artificial Intelligence, 40(40), 33980–33988. https://doi.org/10.1609/aaai.v40i40.40691

Section

AAAI Technical Track on Natural Language Processing V