STaR: Sensitive Trajectory Regulation for Unlearning in Large Reasoning Models

Authors

  • Jingjing Zhou University of Chinese Academy of Sciences
  • Gaoxiang Cong Institute of Computing Technology, Chinese Academy of Sciences
  • Li Su University of Chinese Academy of Sciences
  • Liang Li Institute of Computing Technology, Chinese Academy of Sciences

DOI:

https://doi.org/10.1609/aaai.v40i41.40818

Abstract

Large Reasoning Models (LRMs) have advanced automated multi-step reasoning, but their ability to generate complex Chain-of-Thought (CoT) trajectories introduces severe privacy risks, as sensitive information may be deeply embedded throughout the reasoning process. Existing unlearning approaches for Large Language Models (LLMs), which typically focus on modifying only final answers, are insufficient for LRMs: they fail to remove sensitive content from intermediate steps, leading to persistent privacy leakage and degraded security. To address these challenges, we propose Sensitive Trajectory Regulation (STaR), a parameter-free, inference-time unlearning framework that achieves robust privacy protection throughout the reasoning process. Specifically, we first identify sensitive content via semantic-aware detection. Then, we inject global safety constraints through a secure prompt encoder. Next, we perform trajectory-aware suppression to dynamically block sensitive content across the entire reasoning chain. Finally, we apply token-level adaptive filtering to suppress both exact and paraphrased sensitive tokens during generation. Furthermore, to overcome the inadequacies of existing evaluation protocols, we introduce two metrics: Multi-Decoding Consistency Assessment (MCS), which measures the consistency of unlearning across diverse decoding strategies, and Multi-Granularity Membership Inference Attack (MIA) Evaluation, which quantifies privacy protection at both answer and reasoning-chain levels. Experiments on the R-TOFU benchmark demonstrate that STaR achieves comprehensive and stable unlearning with minimal utility loss, setting a new standard for privacy-preserving reasoning in LRMs.
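To make the final stage of the pipeline concrete, the sketch below illustrates what token-level adaptive filtering could look like in principle. It is a minimal, hypothetical example: the names (`SENSITIVE_TERMS`, `PARAPHRASES`, `filter_step`) and the toy paraphrase map are illustrative assumptions, not the authors' implementation, which operates on model logits rather than plain strings.

```python
# Hypothetical sketch of token-level adaptive filtering: at each decoding
# step, candidate tokens that match sensitive terms exactly, or via a crude
# paraphrase map, are suppressed before selection. All names and data here
# are illustrative, not from the STaR paper.

SENSITIVE_TERMS = {"alice", "123-45-6789"}               # exact sensitive tokens
PARAPHRASES = {"al1ce": "alice", "ssn": "123-45-6789"}   # toy paraphrase lookup

def is_sensitive(token: str) -> bool:
    """Flag a token if it is sensitive itself or paraphrases a sensitive term."""
    t = token.lower()
    return t in SENSITIVE_TERMS or PARAPHRASES.get(t) in SENSITIVE_TERMS

def filter_step(candidates: list[tuple[str, float]]) -> str:
    """Pick the highest-probability candidate that is not sensitive;
    emit a redaction placeholder if every candidate is blocked."""
    safe = [(tok, p) for tok, p in candidates if not is_sensitive(tok)]
    if not safe:
        return "[REDACTED]"
    return max(safe, key=lambda pair: pair[1])[0]

print(filter_step([("Alice", 0.6), ("the", 0.3)]))   # sensitive top-1 is skipped -> "the"
print(filter_step([("Al1ce", 0.9), ("SSN", 0.1)]))   # all candidates blocked -> "[REDACTED]"
```

In a real LRM this check would run inside the decoding loop over vocabulary logits, with the paraphrase test done by semantic similarity rather than a lookup table.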

Published

2026-03-14

How to Cite

Zhou, J., Cong, G., Su, L., & Li, L. (2026). STaR: Sensitive Trajectory Regulation for Unlearning in Large Reasoning Models. Proceedings of the AAAI Conference on Artificial Intelligence, 40(41), 35121–35129. https://doi.org/10.1609/aaai.v40i41.40818

Section

AAAI Technical Track on Natural Language Processing VI