QiMeng-Kernel: Macro-Thinking Micro-Coding Paradigm for LLM-Based High-Performance GPU Kernel Generation

Authors

  • Xinguo Zhu Intelligent Software Research Center, Institute of Software, CAS, Beijing, China Hangzhou Institute for Advanced Study, UCAS, Hangzhou, China University of Chinese Academy of Sciences, Beijing, China
  • Shaohui Peng Intelligent Software Research Center, Institute of Software, CAS, Beijing, China University of Chinese Academy of Sciences, Beijing, China
  • Jiaming Guo State Key Lab of Processors, Institute of Computing Technology, CAS, Beijing, China
  • Yunji Chen State Key Lab of Processors, Institute of Computing Technology, CAS, Beijing, China University of Chinese Academy of Sciences, Beijing, China
  • Qi Guo State Key Lab of Processors, Institute of Computing Technology, CAS, Beijing, China
  • Yuanbo Wen State Key Lab of Processors, Institute of Computing Technology, CAS, Beijing, China
  • Hang Qin Intelligent Software Research Center, Institute of Software, CAS, Beijing, China Hangzhou Institute for Advanced Study, UCAS, Hangzhou, China University of Chinese Academy of Sciences, Beijing, China
  • Ruizhi Chen Intelligent Software Research Center, Institute of Software, CAS, Beijing, China University of Chinese Academy of Sciences, Beijing, China
  • Qirui Zhou State Key Lab of Processors, Institute of Computing Technology, CAS, Beijing, China University of Chinese Academy of Sciences, Beijing, China
  • Ke Gao Intelligent Software Research Center, Institute of Software, CAS, Beijing, China University of Chinese Academy of Sciences, Beijing, China
  • Yanjun Wu Intelligent Software Research Center, Institute of Software, CAS, Beijing, China University of Chinese Academy of Sciences, Beijing, China
  • Chen Zhao Intelligent Software Research Center, Institute of Software, CAS, Beijing, China University of Chinese Academy of Sciences, Beijing, China
  • Ling Li Intelligent Software Research Center, Institute of Software, CAS, Beijing, China University of Chinese Academy of Sciences, Beijing, China

DOI:

https://doi.org/10.1609/aaai.v40i34.40155

Abstract

Developing high-performance GPU kernels is critical for AI and scientific computing, but remains challenging due to its reliance on expert crafting and poor portability. While large language models (LLMs) offer promise for automation, both general-purpose and finetuned LLMs suffer from two fundamental and conflicting limitations: correctness and efficiency. The key reason is that existing LLM-based approaches directly generate entire optimized low-level programs, requiring exploration of an extremely vast space encompassing both optimization policies and implementation code. To address the challenge of exploring this intractable space, we propose Macro Thinking Micro Coding (MTMC), a hierarchical framework inspired by the staged optimization strategy of human experts. It decouples optimization strategy from implementation details, ensuring efficiency through high-level strategy and correctness through low-level implementation. Specifically, Macro Thinking employs reinforcement learning to guide lightweight LLMs in efficiently exploring and learning semantic optimization strategies that maximize hardware utilization. Micro Coding leverages general-purpose LLMs to incrementally implement the stepwise optimization proposals from Macro Thinking, avoiding full-kernel generation errors. Together, they effectively navigate the vast optimization space and intricate implementation details, enabling LLMs to generate high-performance GPU kernels. Comprehensive results on widely adopted benchmarks demonstrate the superior performance of MTMC on GPU kernel generation in both accuracy and running time. On KernelBench, MTMC achieves nearly 100% accuracy at Levels 1-2 and 70% at Level 3, over 50% higher than SOTA general-purpose and domain-finetuned LLMs, with up to 7.3× speedup over LLM-generated kernels and 2.2× over expert-optimized PyTorch Eager kernels. On the more challenging TritonBench, MTMC attains up to 59.64% accuracy and 34× speedup. All models and datasets will be made publicly available.
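The abstract describes a two-level loop: Macro Thinking proposes one semantic optimization at a time, and Micro Coding implements each proposal as an incremental, verifiable edit. The sketch below illustrates that control flow only; all names (`propose_strategy`, `implement_step`, `passes_tests`) and the fixed strategy catalog are illustrative assumptions, not the authors' actual API or policy.

```python
# Minimal sketch of the Macro-Thinking Micro-Coding (MTMC) control loop.
# In the paper, propose_strategy would be an RL-guided lightweight LLM and
# implement_step a general-purpose LLM; here both are stubbed.

from dataclasses import dataclass, field

@dataclass
class KernelState:
    code: str                                     # current, verified kernel code
    applied: list = field(default_factory=list)   # optimization steps applied so far

def propose_strategy(state: KernelState):
    """Macro Thinking: propose the next semantic optimization, or None when done.
    The catalog below is a hypothetical example."""
    catalog = ["tile_loops", "use_shared_memory", "vectorize_loads"]
    for step in catalog:
        if step not in state.applied:
            return step
    return None

def implement_step(state: KernelState, step: str) -> KernelState:
    """Micro Coding: incrementally edit the kernel to realize one step.
    Stubbed here as appending an annotation to the code."""
    return KernelState(code=state.code + f"\n// applied: {step}",
                       applied=state.applied + [step])

def passes_tests(state: KernelState) -> bool:
    """Correctness check after each incremental edit (stubbed as always passing)."""
    return True

def mtmc(initial_code: str) -> KernelState:
    state = KernelState(code=initial_code)
    while (step := propose_strategy(state)) is not None:
        candidate = implement_step(state, step)
        if passes_tests(candidate):   # keep only increments that stay correct
            state = candidate
    return state

result = mtmc("// naive kernel")
print(result.applied)
```

The key design point the sketch captures is that correctness is enforced per increment: a failed step would be discarded without invalidating the kernel built so far, rather than regenerating the whole program.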

Published

2026-03-14

How to Cite

Zhu, X., Peng, S., Guo, J., Chen, Y., Guo, Q., Wen, Y., … Li, L. (2026). QiMeng-Kernel: Macro-Thinking Micro-Coding Paradigm for LLM-Based High-Performance GPU Kernel Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 40(34), 29168–29176. https://doi.org/10.1609/aaai.v40i34.40155

Section

AAAI Technical Track on Machine Learning XI