[1]
Liu, Z. et al. 2026. Targeting Misalignment: A Conflict-Aware Framework for Reward-Model-based LLM Alignment. Proceedings of the AAAI Conference on Artificial Intelligence. 40, 44 (Mar. 2026), 37692–37700. DOI: https://doi.org/10.1609/aaai.v40i44.41104.