Group Causal Policy Optimization for Post-Training Large Language Models

Authors

  • Ziyin Gu, Institute of Software, Chinese Academy of Sciences; University of the Chinese Academy of Sciences
  • Jingyao Wang, Institute of Software, Chinese Academy of Sciences; University of the Chinese Academy of Sciences
  • Ran Zuo, Institute of Software, Chinese Academy of Sciences; University of the Chinese Academy of Sciences; Communication University of China
  • Chuxiong Sun, Institute of Software, Chinese Academy of Sciences; University of the Chinese Academy of Sciences
  • Zeen Song, Institute of Software, Chinese Academy of Sciences; University of the Chinese Academy of Sciences
  • Changwen Zheng, Institute of Software, Chinese Academy of Sciences; University of the Chinese Academy of Sciences
  • Wenwen Qiang, Institute of Software, Chinese Academy of Sciences; University of the Chinese Academy of Sciences

DOI:

https://doi.org/10.1609/aaai.v40i36.40341

Abstract

Recent advances in large language models (LLMs) have broadened their applicability across diverse tasks, yet specialized domains still require targeted post-training. Among existing methods, Group Relative Policy Optimization (GRPO) stands out for its efficiency, leveraging groupwise relative rewards while avoiding costly value function learning. However, GRPO treats candidate responses as independent, overlooking semantic interactions such as complementarity and contradiction. To address this limitation, we first introduce a Structural Causal Model (SCM) that reveals hidden dependencies among candidate responses induced by conditioning on a final integrated output, forming a collider structure. This causal analysis yields two insights: (1) projecting responses onto a causally informed subspace improves prediction quality, and (2) this projection yields a better baseline than query-only conditioning. Building on these insights, we propose Group Causal Policy Optimization (GCPO), which integrates causal structure into optimization through two key components: a causally informed reward adjustment and a novel KL-regularization term that aligns the policy with a causally projected reference distribution. Comprehensive evaluations on multiple benchmarks demonstrate that GCPO consistently surpasses existing methods.
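
As a rough illustration of the ideas the abstract describes, the sketch below first computes the standard GRPO group-relative advantage (rewards normalized within a group of candidate responses), then adjusts it using a projection of each response embedding onto the span of its peers, mimicking the collider-induced dependency the SCM exposes. Everything beyond the group normalization is an assumption for illustration: the function names (causal_projection, adjusted_advantages), the least-squares projection, the cosine-similarity scoring, and the blending weight alpha are hypothetical stand-ins, not the paper's actual construction.

    # Minimal sketch (assumptions flagged inline), not the paper's implementation.
    import numpy as np

    def group_relative_advantages(rewards):
        """Standard GRPO baseline: normalize rewards within the group."""
        rewards = np.asarray(rewards, dtype=np.float64)
        return (rewards - rewards.mean()) / (rewards.std() + 1e-8)

    def causal_projection(embeddings):
        """Project each response embedding onto the subspace spanned by
        the other responses in the group (illustrative stand-in for the
        paper's causally informed projection)."""
        n = embeddings.shape[0]
        projected = np.empty_like(embeddings)
        for i in range(n):
            peers = np.delete(embeddings, i, axis=0)  # (n-1, d)
            # Least-squares projection of embedding i onto span(peers).
            coeffs, *_ = np.linalg.lstsq(peers.T, embeddings[i], rcond=None)
            projected[i] = peers.T @ coeffs
        return projected

    def adjusted_advantages(rewards, embeddings, alpha=0.5):
        """Blend the group-relative advantage with a consistency score
        derived from the projected embedding (hypothetical choice)."""
        adv = group_relative_advantages(rewards)
        proj = causal_projection(embeddings)
        # Assumed proxy: cosine similarity between each embedding and
        # its projection, centered within the group.
        sim = np.sum(proj * embeddings, axis=1) / (
            np.linalg.norm(proj, axis=1)
            * np.linalg.norm(embeddings, axis=1) + 1e-8)
        return adv + alpha * (sim - sim.mean())

    # Toy usage: 4 candidate responses with scalar rewards and 8-d embeddings.
    rng = np.random.default_rng(0)
    rewards = [1.0, 0.2, 0.7, 0.4]
    emb = rng.normal(size=(4, 8))
    print(adjusted_advantages(rewards, emb))

The sketch covers only the reward-adjustment component; GCPO's second component, the KL term toward a causally projected reference distribution, is omitted here since the abstract does not specify its form.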

Published

2026-03-14

How to Cite

Gu, Z., Wang, J., Zuo, R., Sun, C., Song, Z., Zheng, C., & Qiang, W. (2026). Group Causal Policy Optimization for Post-Training Large Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 40(36), 30834-30842. https://doi.org/10.1609/aaai.v40i36.40341

Section

AAAI Technical Track on Natural Language Processing I