Fine-Grained Interpretation of Political Opinions in Large Language Models

Authors

  • Jingyu Hu University of Bristol, UK
  • Mengyue Yang University of Bristol, UK
  • Mengnan Du New Jersey Institute of Technology, USA
  • Weiru Liu University of Bristol, UK

DOI:

https://doi.org/10.1609/aaai.v40i45.41199

Abstract

Studies of LLMs’ political opinions mainly evaluate their open-ended responses. Recent work indicates misalignment between LLMs’ responses and their internal intentions. This motivates us to probe LLMs’ internal mechanisms and uncover their internal political states. Additionally, analysis of LLMs’ political opinions often relies on single-axis concepts, which can lead to concept confounds. Our work extends this to multiple dimensions and applies interpretable techniques for more transparent LLM political concept learning. Specifically, we designed a four-dimensional political learning framework and constructed a corresponding dataset for fine-grained political concept vector learning. These vectors can detect and intervene in LLM internals. Experiments are conducted on eight open-source LLMs with three representation engineering techniques. Results show these vectors can disentangle political concept confounds. Detection tasks validate the semantic meaning of the vectors and show good generalization and robustness in OOD settings. Intervention experiments show that these vectors can implicitly intervene in LLMs, generating responses with targeted political leanings. These insights reveal the need for more transparent auditing for future AI governance.
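To illustrate the kind of representation engineering the abstract refers to, here is a minimal, self-contained sketch of one common approach: deriving a concept vector from the difference of mean hidden-state activations between two opposing prompt sets, then using it for detection (projection) and intervention (activation steering). This is a generic difference-of-means sketch on toy data, not the paper's actual method; all arrays, dimensions, and the steering coefficient `alpha` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hidden-state activations for prompts expressing the two poles of a
# single political axis (illustrative data, not real model activations).
d = 8
pole_a = rng.normal(loc=0.5, scale=1.0, size=(20, d))
pole_b = rng.normal(loc=-0.5, scale=1.0, size=(20, d))

# Difference-of-means concept vector, normalized to unit length.
v = pole_a.mean(axis=0) - pole_b.mean(axis=0)
v = v / np.linalg.norm(v)

def detect(h: np.ndarray) -> float:
    """Project an activation onto the concept vector; the sign of the
    projection indicates which pole the internal state leans toward."""
    return float(h @ v)

def intervene(h: np.ndarray, alpha: float = 2.0) -> np.ndarray:
    """Steer an activation along the concept direction by strength alpha
    (negative alpha steers toward the opposite pole)."""
    return h + alpha * v

h = pole_a[0]
# Steering with negative alpha moves the projection toward pole B.
print(detect(intervene(h, alpha=-5.0)) < detect(h))  # True
```

In a real setting, the activations would be taken from a chosen transformer layer during forward passes, and the intervention would be applied with a forward hook before generation.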

Published

2026-03-14

How to Cite

Hu, J., Yang, M., Du, M., & Liu, W. (2026). Fine-Grained Interpretation of Political Opinions in Large Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 40(45), 38570-38579. https://doi.org/10.1609/aaai.v40i45.41199

Section

AAAI Special Track on AI for Social Impact I