Interpretable Reward Model via Sparse Autoencoder
DOI:
https://doi.org/10.1609/aaai.v40i41.40783
Abstract
Large language models (LLMs) have been widely deployed across numerous fields. Reinforcement Learning from Human Feedback (RLHF) leverages reward models (RMs) as proxies for human preferences to align LLM behaviors with human values, making the accuracy, reliability, and interpretability of RMs critical for effective alignment. However, traditional RMs lack interpretability, offer limited insight into the reasoning behind reward assignments, and are inflexible toward user preference shifts. While recent multidimensional RMs aim for improved interpretability, they often fail to provide feature-level attribution and require costly annotations. To overcome these limitations, we introduce the Sparse Autoencoder-Enhanced Reward Model (SARM), a novel architecture that integrates a pretrained Sparse Autoencoder (SAE) into a reward model. SARM maps the hidden activations of an LLM-based RM into an interpretable, sparse, and monosemantic feature space, from which a scalar head aggregates feature activations to produce transparent and conceptually meaningful reward scores. Empirical evaluations demonstrate that SARM facilitates direct feature-level attribution of reward assignments, allows dynamic adjustment to preference shifts, and achieves superior alignment performance compared to conventional reward models.
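To make the described architecture concrete, the sketch below renders the abstract's pipeline (LLM hidden activations, SAE encoding into sparse features, and a scalar aggregation head) as a minimal PyTorch module. This is a hypothetical illustration, not the authors' released implementation: the names `backbone`, `sae_encoder`, and `reward_head` are assumptions, as is a Hugging Face-style backbone interface that returns `last_hidden_state`.

```python
import torch
import torch.nn as nn


class SparseAutoencoderRewardModel(nn.Module):
    """Minimal sketch of the SARM idea: a pretrained SAE encoder maps the
    LLM's hidden activations into a sparse, monosemantic feature space,
    and a linear scalar head aggregates those features into a reward."""

    def __init__(self, backbone: nn.Module, sae_encoder: nn.Linear, num_features: int):
        super().__init__()
        self.backbone = backbone        # pretrained LLM (assumed HF-style interface)
        self.sae_encoder = sae_encoder  # pretrained SAE encoder, typically frozen
        self.reward_head = nn.Linear(num_features, 1, bias=False)

    def forward(self, input_ids: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
        # Use the final-token hidden state, a common choice for reward models.
        hidden = self.backbone(
            input_ids, attention_mask=attention_mask
        ).last_hidden_state[:, -1, :]
        # SAE encoding: ReLU yields sparse, non-negative feature activations.
        features = torch.relu(self.sae_encoder(hidden))
        # The reward is a weighted sum of interpretable feature activations,
        # so reward_head.weight provides direct feature-level attribution.
        return self.reward_head(features).squeeze(-1)
```

Because the reward is a linear function of sparse feature activations in this sketch, inspecting the weights of `reward_head` attributes the score to individual features, and editing those weights is one plausible route to the dynamic adjustment under preference shifts that the abstract describes.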
Published
2026-03-14
How to Cite
Zhang, S., Shi, W., Li, S., Liao, J., Liang, T., Cai, H., & Wang, X. (2026). Interpretable Reward Model via Sparse Autoencoder. Proceedings of the AAAI Conference on Artificial Intelligence, 40(41), 34808–34816. https://doi.org/10.1609/aaai.v40i41.40783
Issue
Vol. 40 No. 41 (2026)
Section
AAAI Technical Track on Natural Language Processing VI