Localizing Persona Representations in LLMs
DOI:
https://doi.org/10.1609/aies.v8i1.36577
Abstract
We present a study on how and where personas – defined by distinct sets of human characteristics, values, and beliefs – are encoded in the representation space of large language models (LLMs). Using a range of dimension reduction and pattern recognition methods, we first identify the model layers that show the greatest divergence in encoding these representations. We then analyze the activations within a selected layer to examine how specific personas are encoded relative to others, including their shared and distinct embedding spaces. We find that, across multiple pre-trained decoder-only LLMs, the analyzed personas show large differences in representation space only within the final third of the decoder layers. We observe overlapping activations for specific ethical perspectives – such as moral nihilism and utilitarianism – suggesting a degree of polysemy. In contrast, political ideologies like conservatism and liberalism appear to be represented in more distinct regions. These findings help to improve our understanding of how LLMs internally represent information and can inform future efforts in refining the modulation of specific human traits in LLM outputs. Warning: This paper includes potentially offensive sample statements.
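The abstract's first step – scoring each decoder layer by how strongly two persona conditions diverge in activation space – can be illustrated with a minimal sketch. This is not the paper's method; the divergence score here (distance between persona centroids, normalized by the pooled per-dimension spread) and the synthetic activations are illustrative assumptions standing in for real hidden states extracted from an LLM.

```python
import numpy as np

def layer_divergence(acts_a, acts_b):
    """Per-layer separation between two persona activation sets.

    acts_a, acts_b: arrays of shape (n_layers, n_samples, hidden_dim),
    e.g. hidden states collected from prompts written under two personas.
    Returns one score per layer: distance between persona centroids,
    normalized by the pooled average per-dimension standard deviation.
    (Illustrative metric, not the paper's.)
    """
    scores = []
    for la, lb in zip(acts_a, acts_b):
        mu_a, mu_b = la.mean(axis=0), lb.mean(axis=0)
        pooled_std = np.concatenate([la, lb]).std(axis=0).mean() + 1e-8
        scores.append(np.linalg.norm(mu_a - mu_b) / pooled_std)
    return np.array(scores)

# Toy data: 12 "layers" where only the final third carries a persona shift,
# mimicking the paper's finding that divergence appears in late layers.
rng = np.random.default_rng(0)
n_layers, n_samples, hidden_dim = 12, 50, 64
acts_a = rng.normal(size=(n_layers, n_samples, hidden_dim))
acts_b = rng.normal(size=(n_layers, n_samples, hidden_dim))
acts_b[8:] += 2.0  # inject a mean shift into the last four layers

scores = layer_divergence(acts_a, acts_b)
print(scores.argmax())  # index of the most divergent layer
```

With real models, the activation tensors would come from a forward pass that exposes per-layer hidden states; the scoring logic itself is model-agnostic.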
Published
2025-10-15
How to Cite
Cintas, C., Rateike, M., Miehling, E., Daly, E., & Speakman, S. (2025). Localizing Persona Representations in LLMs. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 8(1), 630-642. https://doi.org/10.1609/aies.v8i1.36577