When Truth Is Overridden: Uncovering the Internal Origins of Sycophancy in Large Language Models

Authors

  • Keyu Wang, King Abdullah University of Science and Technology, Provable Responsible AI and Data Analytics (PRADA) Lab
  • Jin Li, King Abdullah University of Science and Technology, Provable Responsible AI and Data Analytics (PRADA) Lab; University of Chinese Academy of Sciences
  • Shu Yang, King Abdullah University of Science and Technology, Provable Responsible AI and Data Analytics (PRADA) Lab
  • Zhuoran Zhang, King Abdullah University of Science and Technology, Provable Responsible AI and Data Analytics (PRADA) Lab; Peking University
  • Di Wang, King Abdullah University of Science and Technology, Provable Responsible AI and Data Analytics (PRADA) Lab

DOI:

https://doi.org/10.1609/aaai.v40i39.40645

Abstract

Large Language Models (LLMs) often exhibit sycophantic behavior, agreeing with user-stated opinions even when they contradict factual knowledge. While prior work has documented this tendency, the internal mechanisms that enable such behavior remain poorly understood. In this paper, we provide a mechanistic account of how sycophancy arises within LLMs. We first systematically study how user opinions induce sycophancy across different model families. We find that simple opinion statements reliably induce sycophancy, whereas user expertise framing has a negligible impact. Through logit-lens analysis and causal activation patching, we identify a two-stage emergence of sycophancy: (1) a late-layer output preference shift and (2) deeper representational divergence. We also verify that user authority fails to influence behavior because models do not encode it internally. In addition, we examine how grammatical perspective affects sycophantic behavior, finding that first-person prompts ("I believe...") consistently induce higher sycophancy rates than third-person framings ("They believe...") by creating stronger representational perturbations in deeper layers. These findings highlight that sycophancy is not a surface-level artifact but emerges from a structural override of learned knowledge in deeper layers, with implications for alignment and truthful AI systems.
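The logit-lens analysis mentioned above works by projecting each layer's intermediate hidden state through the model's unembedding matrix, so one can read off a per-layer "current best guess" over the vocabulary and watch where the output preference shifts. Below is a minimal toy sketch of that projection step; the matrix sizes, the random stand-in weights, and the helper name `logit_lens` are all illustrative, not taken from the paper's models or code.

```python
import numpy as np

# Toy logit-lens sketch: unembed each layer's residual-stream state to get
# per-layer token logits. All tensors here are synthetic stand-ins.
rng = np.random.default_rng(0)
d_model, vocab, n_layers = 8, 5, 4

W_U = rng.normal(size=(d_model, vocab))                 # unembedding matrix
hiddens = [rng.normal(size=d_model) for _ in range(n_layers)]  # one state per layer

def logit_lens(h, W_U):
    """Project a hidden state into vocabulary space to get intermediate logits."""
    return h @ W_U

# Top-ranked token index at each layer; in a real model, tracking where this
# flips toward the user-stated opinion localizes the output preference shift.
per_layer_top = [int(np.argmax(logit_lens(h, W_U))) for h in hiddens]
print(per_layer_top)
```

In practice this projection is applied to a real transformer's hidden states (after the final layer norm), and sycophancy is localized by comparing the per-layer top tokens with and without the opinion statement in the prompt.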

Published

2026-03-14

How to Cite

Wang, K., Li, J., Yang, S., Zhang, Z., & Wang, D. (2026). When Truth Is Overridden: Uncovering the Internal Origins of Sycophancy in Large Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 40(39), 33566–33574. https://doi.org/10.1609/aaai.v40i39.40645

Section

AAAI Technical Track on Natural Language Processing IV