Do Transformer Interpretability Methods Transfer to RNNs?
DOI:
https://doi.org/10.1609/aaai.v39i26.34969
Abstract
Recent advances in recurrent neural network architectures, such as Mamba and RWKV, have enabled RNNs to match or exceed the performance of equal-size transformers in terms of language modeling perplexity and downstream evaluations, suggesting that future systems may be built on completely new architectures. In this paper, we examine whether selected interpretability methods originally designed for transformer language models transfer to these up-and-coming recurrent architectures. Specifically, we focus on steering model outputs via contrastive activation addition, on eliciting latent predictions via the tuned lens, and on eliciting latent knowledge from models fine-tuned to produce false outputs under certain conditions. Our results show that most of these techniques are effective when applied to RNNs, and we show that it is possible to improve some of them by taking advantage of RNNs' compressed state.
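The first technique the abstract names, contrastive activation addition, steers a model by taking the difference between mean activations on contrastive prompt pairs and adding that vector back into the residual stream during generation. Below is a minimal sketch in PyTorch, assuming a model whose decoder blocks are exposed as `model.layers` and return the residual stream as their first output; the attribute names, the `tokenize` callable, and the `multiplier` value are illustrative assumptions, not the paper's exact implementation.

```python
import torch

@torch.no_grad()
def steering_vector(model, tokenize, pos_prompts, neg_prompts, layer):
    """Mean difference of last-token activations at `layer` between
    contrastive (positive vs. negative) prompts."""
    acts = {"pos": [], "neg": []}
    captured = []

    def hook(_module, _inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        captured.append(hidden[:, -1, :])  # last-token residual stream

    handle = model.layers[layer].register_forward_hook(hook)
    try:
        for key, prompts in (("pos", pos_prompts), ("neg", neg_prompts)):
            for prompt in prompts:
                captured.clear()
                model(tokenize(prompt))
                acts[key].append(captured[0].squeeze(0))
    finally:
        handle.remove()
    return torch.stack(acts["pos"]).mean(0) - torch.stack(acts["neg"]).mean(0)

def add_steering(model, layer, vec, multiplier=8.0):
    """Add the scaled steering vector to the residual stream at `layer`
    on every forward pass; returns the hook handle for later removal."""
    def hook(_module, _inputs, output):
        if isinstance(output, tuple):
            return (output[0] + multiplier * vec,) + output[1:]
        return output + multiplier * vec
    return model.layers[layer].register_forward_hook(hook)
```

Because RNNs such as Mamba and RWKV also carry a compressed recurrent state between tokens, the same difference-of-means idea can in principle be applied to that state rather than the residual stream, which is the kind of RNN-specific improvement the abstract alludes to.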
Published
2025-04-11
How to Cite
Paulo, G., Marshall, T., & Belrose, N. (2025). Do Transformer Interpretability Methods Transfer to RNNs? Proceedings of the AAAI Conference on Artificial Intelligence, 39(26), 27565–27572. https://doi.org/10.1609/aaai.v39i26.34969
Section
AAAI Technical Track on AI Alignment