Utilize the Flow Before Stepping into the Same River Twice: Certainty Represented Knowledge Flow for Refusal-Aware Instruction Tuning

Authors

  • Runchuan Zhu (Peking University; Shanghai Artificial Intelligence Laboratory)
  • Zhipeng Ma (Southwest Jiaotong University)
  • Jiang Wu (Shanghai Artificial Intelligence Laboratory)
  • Junyuan Gao (University of Chinese Academy of Sciences; Shanghai Artificial Intelligence Laboratory)
  • Jiaqi Wang (Shanghai Artificial Intelligence Laboratory)
  • Dahua Lin (Shanghai Artificial Intelligence Laboratory)
  • Conghui He (Shanghai Artificial Intelligence Laboratory)

DOI:

https://doi.org/10.1609/aaai.v39i24.34812

Abstract

Refusal-Aware Instruction Tuning (RAIT) enables Large Language Models (LLMs) to refuse to answer unknown questions. By modifying the responses to unknown questions in the training data into refusal responses such as "I don't know", RAIT enhances the reliability of LLMs and reduces their hallucination. Generally, RAIT modifies training samples based on the correctness of the initial LLM's responses. However, this crude approach can cause LLMs to excessively refuse questions they could have answered correctly, a problem we call over-refusal. In this paper, we identify two primary causes of over-refusal: Static conflict occurs when similar samples within the LLM's feature space receive differing supervision signals (original vs. modified "I don't know"). Dynamic conflict arises because the LLM's knowledge evolves during SFT, enabling it to answer questions that were previously unanswerable; yet those now-answerable training samples still carry the "I don't know" supervision signals assigned from the initial LLM state, leading to inconsistencies. Both conflicts cause the trained LLM to misclassify known questions as unknown, resulting in over-refusal. To address this issue, we introduce Certainty Represented Knowledge Flow for Refusal-Aware Instruction Tuning (CRaFT). CRaFT centers on two main contributions: First, we additionally incorporate response certainty to selectively filter and modify data, reducing static conflicts. Second, we perform preliminary rehearsal training to characterize changes in the LLM's knowledge state, which helps mitigate dynamic conflicts during fine-tuning. We conducted extensive experiments on open-ended question answering and multiple-choice question answering tasks. Experimental results show that CRaFT improves the LLM's overall performance during the RAIT process.
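The abstract's first contribution can be illustrated with a small sketch. This is not the authors' code; all field names and thresholds below are illustrative assumptions. Vanilla RAIT relabels every question the initial LLM answers incorrectly as a refusal, whereas certainty-aware filtering additionally drops samples whose certainty is ambiguous, since those are the ones likely to sit near similar-but-differently-labeled samples in feature space (static conflict).

```python
# A minimal sketch of certainty-aware RAIT data construction.
# Assumed inputs: each sample records the initial LLM's correctness and a
# response-certainty score (e.g. agreement across sampled generations).
# Thresholds cert_hi and cert_lo are hypothetical.

IDK = "I don't know."

def build_rait_data(samples, cert_hi=0.75, cert_lo=0.25):
    """samples: dicts with 'question', 'answer', 'correct' (bool),
    'certainty' (float in [0, 1])."""
    train = []
    for s in samples:
        if s["correct"] and s["certainty"] >= cert_hi:
            # Confidently known: keep the original supervised answer.
            train.append({"question": s["question"], "response": s["answer"]})
        elif not s["correct"] and s["certainty"] <= cert_lo:
            # Confidently unknown: relabel as a refusal.
            train.append({"question": s["question"], "response": IDK})
        # Middle band: certainty is ambiguous; dropping these samples
        # avoids handing conflicting supervision to similar questions.
    return train

data = [
    {"question": "Q1", "answer": "A1", "correct": True, "certainty": 0.9},
    {"question": "Q2", "answer": "A2", "correct": False, "certainty": 0.1},
    {"question": "Q3", "answer": "A3", "correct": False, "certainty": 0.5},
]
print(build_rait_data(data))  # Q3 is filtered out as ambiguous
```

The second contribution (rehearsal training to track knowledge flow) would, under this sketch, re-estimate `correct` and `certainty` after a preliminary SFT pass and relabel accordingly, so that questions the model has since learned are not frozen as refusals.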

Published

2025-04-11

How to Cite

Zhu, R., Ma, Z., Wu, J., Gao, J., Wang, J., Lin, D., & He, C. (2025). Utilize the Flow Before Stepping into the Same River Twice: Certainty Represented Knowledge Flow for Refusal-Aware Instruction Tuning. Proceedings of the AAAI Conference on Artificial Intelligence, 39(24), 26157–26165. https://doi.org/10.1609/aaai.v39i24.34812

Section

AAAI Technical Track on Natural Language Processing III