[1]
X. Zhang, Z. Zhao, W. Shi, K. Xu, D. Huang, and X. Hu, “Safety Alignment of Large Language Models via Contrasting Safe and Harmful Distributions,” AAAI, vol. 40, no. 41, pp. 34827–34835, Mar. 2026.