LLM-based Simulations of Human Behavior in Psychological Research

Authors

  • Santiago Flórez Sánchez, Universidad de los Andes

DOI:

https://doi.org/10.1609/aies.v8i1.36603

Abstract

What does it mean for LLMs to replace human participants in psychological research? My analysis of this question is structured around two central philosophical problems: scientific representation and epistemic opacity. By examining how these issues shape trustful and distrustful stances toward using LLMs as models of the human mind, I describe tendencies in the scientific literature and their relation to emerging interpretability and elicitation techniques. My primary contributions are, first, a philosophical framework for understanding the conceptual tensions that shape the debate, and second, a taxonomy that maps stances in the empirical literature to their corresponding methodological innovations. I show that both trustful and distrustful positions, despite their disagreements, foster the methodological innovations necessary for building a more robust epistemological foundation for LLM-based simulations. Accordingly, empirical research stances must be responsive to the pressures and constraints implied by their underlying philosophical intuitions. This means, for instance, that trustful stances should explore protocols that leverage fine-tuning and prompt design to evaluate correspondence and consistency in more complex behavioral patterns, thereby working around model opacity. Distrustful stances, in turn, should further develop parallels between LLMs and the human mind at the algorithmic and implementational levels, using XAI techniques and computational cognitive science to probe the representational relationship.

Published

2025-10-15

How to Cite

Flórez Sánchez, S. (2025). LLM-based Simulations of Human Behavior in Psychological Research. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 8(1), 955-962. https://doi.org/10.1609/aies.v8i1.36603