Answering the Unanswerable Is to Err Knowingly: Analyzing and Mitigating Abstention Failures in Large Reasoning Models

Authors

  • Yi Liu, Nanjing University
  • Xiangyu Liu, Nanjing University
  • Zequn Sun, Nanjing University
  • Wei Hu, Nanjing University

DOI:

https://doi.org/10.1609/aaai.v40i38.40496

Abstract

Large reasoning models (LRMs) have shown remarkable progress on complex reasoning tasks. However, some questions posed to LRMs are inherently unanswerable, such as math problems lacking sufficient conditions. We find that LRMs continually fail to provide appropriate abstentions when confronted with these unanswerable questions. In this paper, we systematically analyze, investigate, and resolve this issue for trustworthy AI. We first conduct a detailed analysis of the distinct response behaviors of LRMs when facing unanswerable questions. Then, we show that LRMs possess sufficient cognitive capabilities to recognize the flaws in these questions. However, they fail to exhibit appropriate abstention behavior, revealing a misalignment between their internal cognition and external response. Finally, to resolve this issue, we propose a lightweight, two-stage method that combines cognitive monitoring with inference-time intervention. Experimental results demonstrate that our method significantly improves the abstention rate while maintaining the reasoning performance.
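The abstract describes the proposed remedy only at a high level: a two-stage pipeline that first monitors the model's internal cognition and then intervenes at inference time. As a rough, hypothetical illustration of what such a "monitor, then intervene" loop can look like in general (this is not the authors' implementation; the linear probe, the threshold, and every name below are assumptions made for the sketch), consider the following Python example:

    # Hypothetical sketch of a generic two-stage "monitor, then intervene" pipeline.
    # Stage 1: a linear probe over a hidden-state vector estimates whether the
    #          model internally regards the question as unanswerable.
    # Stage 2: if the probe fires, generation is redirected toward an explicit
    #          abstention instead of forcing an answer.
    # All names, weights, and thresholds are illustrative assumptions.
    import numpy as np

    def probe_unanswerable(hidden_state: np.ndarray,
                           probe_weights: np.ndarray,
                           probe_bias: float) -> float:
        """Return the probe's estimated probability that the question is unanswerable."""
        logit = float(hidden_state @ probe_weights + probe_bias)
        return 1.0 / (1.0 + np.exp(-logit))

    def generate_with_abstention(question: str,
                                 hidden_state: np.ndarray,
                                 probe_weights: np.ndarray,
                                 probe_bias: float,
                                 answer_fn,
                                 threshold: float = 0.5) -> str:
        """Answer normally unless the probe detects an unanswerable question."""
        p_unanswerable = probe_unanswerable(hidden_state, probe_weights, probe_bias)
        if p_unanswerable >= threshold:
            # Inference-time intervention: abstain instead of answering anyway.
            return ("I cannot answer this question: it appears to be missing "
                    "conditions needed to determine a unique solution.")
        return answer_fn(question)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        dim = 16
        # Toy stand-ins for a real model's hidden state and a trained probe.
        hidden_state = rng.normal(size=dim)
        probe_weights = rng.normal(size=dim)
        probe_bias = 0.0
        print(generate_with_abstention(
            "Find x given that x + y = 3.",      # under-specified question
            hidden_state, probe_weights, probe_bias,
            answer_fn=lambda q: "x = 1"))        # placeholder answer path

In such a setup, the monitoring stage reads the model's internal state to detect recognition of a flawed question, and the intervention stage converts that recognition into an explicit abstention; the paper's actual method should be consulted for how the two stages are realized.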

Published

2026-03-14

How to Cite

Liu, Y., Liu, X., Sun, Z., & Hu, W. (2026). Answering the Unanswerable Is to Err Knowingly: Analyzing and Mitigating Abstention Failures in Large Reasoning Models. Proceedings of the AAAI Conference on Artificial Intelligence, 40(38), 32231–32239. https://doi.org/10.1609/aaai.v40i38.40496

Issue

Vol. 40 No. 38 (2026)

Section

AAAI Technical Track on Natural Language Processing III