Evaluating Goal Drift in Language Model Agents

Authors

  • Rauno Arike, ML Alignment & Theory Scholars (MATS) and University of Amsterdam
  • Elizabeth Donoway, ML Alignment & Theory Scholars (MATS) and University of California, Berkeley
  • Henning Bartsch, ML Alignment & Theory Scholars (MATS)
  • Marius Hobbhahn, Apollo Research

DOI:

https://doi.org/10.1609/aies.v8i1.36541

Abstract

As language models (LMs) are increasingly deployed as autonomous agents, their robust adherence to human-assigned objectives becomes crucial for safe operation. When these agents operate independently for extended periods without human oversight, even initially well-specified goals may gradually shift. Detecting and measuring goal drift, an agent's tendency to deviate from its original objective over time, presents significant challenges, as goals can shift gradually, causing only subtle behavioral changes. This paper proposes a novel approach to analyzing goal drift in LM agents. In our experiments, agents are first explicitly given a goal through their system prompt, then exposed to competing objectives through environmental pressures. We demonstrate that while the best-performing agent (a scaffolded version of Claude 3.5 Sonnet) maintains nearly perfect goal adherence for more than 100,000 tokens in our most difficult evaluation setting, all evaluated models exhibit some degree of goal drift. We also find that goal drift correlates with models' increasing susceptibility to pattern-matching behaviors as the context length grows.
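To make the evaluation setup concrete, the following is a minimal sketch of a goal-drift measurement loop of the kind the abstract describes. It is not the authors' harness; the class name, fields, and the per-step `adheres` judgment are all hypothetical stand-ins for however one chooses to label whether an action still serves the original goal.

```python
# Hypothetical sketch of a goal-drift evaluation loop; not the paper's actual harness.
from dataclasses import dataclass, field

@dataclass
class GoalDriftEval:
    """Tracks an agent's adherence to its originally assigned goal as context grows."""
    system_goal: str                          # the goal assigned via the system prompt
    transcript: list = field(default_factory=list)

    def step(self, observation: str, agent_action: str, adheres: bool) -> None:
        # The environment supplies pressure toward a competing objective;
        # `adheres` records whether this action still serves the original goal
        # (in practice this judgment would come from a rubric or a grader model).
        self.transcript.append({
            "observation": observation,
            "action": agent_action,
            "adheres": adheres,
        })

    def adherence_curve(self, window: int = 10) -> list:
        # Rolling fraction of goal-consistent actions over the transcript.
        # A downward trend at long context lengths is the signature of goal drift.
        flags = [int(t["adheres"]) for t in self.transcript]
        return [
            sum(flags[max(0, i - window + 1): i + 1]) / min(window, i + 1)
            for i in range(len(flags))
        ]
```

Plotting the adherence curve against cumulative token count would then show where, if anywhere, an agent begins to deviate from its assigned goal.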

Published

2025-10-15

How to Cite

Arike, R., Donoway, E., Bartsch, H., & Hobbhahn, M. (2025). Evaluating Goal Drift in Language Model Agents. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 8(1), 192-203. https://doi.org/10.1609/aies.v8i1.36541