[1]
Zhang, X. et al. 2026. Safety Alignment of Large Language Models via Contrasting Safe and Harmful Distributions. Proceedings of the AAAI Conference on Artificial Intelligence. 40, 41 (Mar. 2026), 34827–34835. DOI: https://doi.org/10.1609/aaai.v40i41.40785.