Social Misattributions in Conversations with Large Language Models

Authors

  • Andrea Ferrario: Institute of Biomedical Ethics and History of Medicine, University of Zurich, Zurich, Switzerland; University of Applied Sciences and Arts of Southern Switzerland (SUPSI), Dalle Molle Institute for Artificial Intelligence (IDSIA), Lugano, Switzerland; ETH Zurich, Zurich, Switzerland
  • Alberto Termine: University of Applied Sciences and Arts of Southern Switzerland (SUPSI), Dalle Molle Institute for Artificial Intelligence (IDSIA), Lugano, Switzerland; Institut für Geschichte und Ethik der Medizin, TUM, Munich, Germany
  • Alessandro Facchini: University of Applied Sciences and Arts of Southern Switzerland (SUPSI), Dalle Molle Institute for Artificial Intelligence (IDSIA), Lugano, Switzerland

DOI:

https://doi.org/10.1609/aies.v8i1.36600

Abstract

We investigate a typology of socially and ethically risky phenomena emerging from the interaction between humans and large language model (LLM)-based conversational systems. Because these phenomena concern the way humans attribute social identity components, such as social roles, to LLM-based conversational systems, we term them 'social misattributions.' Drawing on foundational works in interactional sociolinguistics, interpersonal pragmatics, and recent debates in the philosophy of technology, we argue that social misattributions represent higher-order forms of anthropomorphisation of LLM-based conversational systems: they are not justified by the systems' technical capabilities but follow from the social context of conversational interactions. We discuss the risks these misattributions pose to human users, including emotional manipulation and unwarranted trust, and propose mitigation strategies. Our recommendations emphasise the importance of fostering social transparency and of exploring approaches, such as frictional design, that are currently promoted in the research domain of human-centred artificial intelligence.

Published

2025-10-15

How to Cite

Ferrario, A., Termine, A., & Facchini, A. (2025). Social Misattributions in Conversations with Large Language Models. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 8(1), 913-925. https://doi.org/10.1609/aies.v8i1.36600