All Too Human? Mapping and Mitigating the Risk from Anthropomorphic AI

Authors

  • Canfer Akbulut Google DeepMind
  • Laura Weidinger Google DeepMind
  • Arianna Manzini Google DeepMind
  • Iason Gabriel Google DeepMind
  • Verena Rieser Google DeepMind

DOI:

https://doi.org/10.1609/aies.v7i1.31613

Abstract

The development of highly capable conversational agents, underwritten by large language models, has the potential to shape user interaction with this technology in profound ways, particularly when the technology is anthropomorphic, or appears human-like. Although the effects of anthropomorphic AI are often benign, anthropomorphic design features also create new kinds of risk. For example, users may form emotional connections to human-like AI, creating the risk of infringing on user privacy and autonomy through over-reliance. To better understand the possible pitfalls of anthropomorphic AI systems, we make two contributions: first, we survey anthropomorphic features that have been embedded in interactive systems in the past, and leverage this precedent to highlight the current implications of anthropomorphic design. Second, we propose research directions for informing the ethical design of anthropomorphic AI. In advancing the responsible development of AI, we promote approaches to the ethical foresight, evaluation, and mitigation of harms arising from user interactions with anthropomorphic AI.

Published

2024-10-16

How to Cite

Akbulut, C., Weidinger, L., Manzini, A., Gabriel, I., & Rieser, V. (2024). All Too Human? Mapping and Mitigating the Risk from Anthropomorphic AI. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 7(1), 13–26. https://doi.org/10.1609/aies.v7i1.31613