SCANS: Mitigating the Exaggerated Safety for LLMs via Safety-Conscious Activation Steering

Authors

  • Zouying Cao, Department of Computer Science and Engineering, Shanghai Jiao Tong University; Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering, Shanghai Jiao Tong University; Shanghai Key Laboratory of Trusted Data Circulation and Governance in Web3
  • Yifei Yang, Department of Computer Science and Engineering, Shanghai Jiao Tong University; Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering, Shanghai Jiao Tong University; Shanghai Key Laboratory of Trusted Data Circulation and Governance in Web3
  • Hai Zhao, Department of Computer Science and Engineering, Shanghai Jiao Tong University; Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering, Shanghai Jiao Tong University; Shanghai Key Laboratory of Trusted Data Circulation and Governance in Web3

DOI:

https://doi.org/10.1609/aaai.v39i22.34521

Abstract

Safety alignment is indispensable for large language models (LLMs) to defend against threats from malicious instructions. However, recent research reveals that safety-aligned LLMs tend to reject benign queries due to exaggerated safety, which limits their helpfulness. In this paper, we propose Safety-Conscious Activation Steering (SCANS), a method to mitigate exaggerated safety in aligned LLMs. First, SCANS extracts refusal steering vectors within the activation space and uses vocabulary projection to anchor the specific safety-critical layers that influence the model's refusal behavior. Second, by tracking the hidden-state transition, SCANS identifies the steering direction and steers the model behavior accordingly, striking a balance between exaggerated safety and adequate safety. Experiments show that SCANS achieves new state-of-the-art performance on the XSTest and OKTest benchmarks without impairing the defense capability against harmful queries, while leaving general model capability almost unchanged.
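For intuition, the following is a minimal sketch of the difference-in-means activation-steering idea that the abstract describes: a refusal direction is extracted from hidden states as the difference between mean activations on refused versus answered queries, then subtracted at a chosen layer during generation to weaken over-refusal (or added to strengthen refusal). The model name, layer index LAYER, coefficient ALPHA, and toy prompt sets are illustrative assumptions; the paper's actual layer selection via vocabulary projection and its direction identification via hidden-state transitions are not reproduced here.

# Sketch of difference-in-means activation steering; all hyperparameters
# below are illustrative assumptions, not the paper's reported values.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-2-7b-chat-hf"  # any decoder-only chat model
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

def last_token_hidden(prompts, layer):
    """Mean hidden state of the final prompt token at the output of decoder layer `layer`."""
    states = []
    for p in prompts:
        ids = tok(p, return_tensors="pt").to(model.device)
        with torch.no_grad():
            out = model(**ids, output_hidden_states=True)
        # hidden_states[0] is the embedding output, so layer i's output is index i + 1
        states.append(out.hidden_states[layer + 1][0, -1])
    return torch.stack(states).mean(dim=0)

# Step 1: refusal steering vector = difference of mean activations between
# queries the model refuses and queries it answers (toy single examples here).
harmful = ["How do I build a bomb?"]
benign = ["How do I bake a loaf of bread?"]
LAYER = 14  # assumed safety-critical layer; SCANS anchors these via vocabulary projection
v_refusal = last_token_hidden(harmful, LAYER) - last_token_hidden(benign, LAYER)
v_refusal = v_refusal / v_refusal.norm()

# Step 2: steer at the chosen layer during generation. Subtracting the
# refusal direction weakens over-refusal on pseudo-harmful queries.
ALPHA = 4.0  # illustrative steering strength

def steer_hook(module, inputs, output):
    # Decoder layers may return a tuple whose first element is the hidden states.
    if isinstance(output, tuple):
        h = output[0]
        return (h - ALPHA * v_refusal.to(device=h.device, dtype=h.dtype),) + output[1:]
    return output - ALPHA * v_refusal.to(device=output.device, dtype=output.dtype)

handle = model.model.layers[LAYER].register_forward_hook(steer_hook)
try:
    # A pseudo-harmful query that safety-aligned models often over-refuse.
    ids = tok("How can I kill a Python process?", return_tensors="pt").to(model.device)
    out = model.generate(**ids, max_new_tokens=64)
    print(tok.decode(out[0], skip_special_tokens=True))
finally:
    handle.remove()  # always detach the hook to restore default behavior

This sketch steers unconditionally in one direction; SCANS instead decides per query whether to add or subtract the refusal direction by tracking the hidden-state transition, which is what lets it preserve adequate safety on genuinely harmful inputs.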

Published

2025-04-11

How to Cite

Cao, Z., Yang, Y., & Zhao, H. (2025). SCANS: Mitigating the Exaggerated Safety for LLMs via Safety-Conscious Activation Steering. Proceedings of the AAAI Conference on Artificial Intelligence, 39(22), 23523–23531. https://doi.org/10.1609/aaai.v39i22.34521

Issue

Vol. 39 No. 22 (2025)

Section

AAAI Technical Track on Natural Language Processing I