Are Large Language Models Moral Hypocrites? A Study Based on Moral Foundations

Authors

  • José Luiz Nunes, Department of Informatics, PUC-Rio; FGV Direito Rio
  • Guilherme F. C. F. Almeida, Insper Institute of Education and Research
  • Marcelo de Araujo, Federal University of Rio de Janeiro; State University of Rio de Janeiro
  • Simone D. J. Barbosa, Department of Informatics, PUC-Rio

DOI:

https://doi.org/10.1609/aies.v7i1.31704

Abstract

Large language models (LLMs) have taken centre stage in debates on Artificial Intelligence, yet there remains a gap in how to assess their conformity to important human values. In this paper, we investigate whether two state-of-the-art LLMs, GPT-4 and Claude 2.1, are moral hypocrites (Gemini Pro and LLAMA 2 did not generate valid results). We employ two research instruments based on Moral Foundations Theory: (i) the Moral Foundations Questionnaire (MFQ), which investigates which values are considered morally relevant in abstract moral judgements; and (ii) the Moral Foundations Vignettes (MFVs), which evaluate moral cognition in concrete scenarios related to each moral foundation. We characterise conflicts in values between these different levels of abstraction in moral evaluation as hypocrisy. We found that both models displayed reasonable consistency within each instrument compared to humans, but that they displayed contradictory and hypocritical behaviour when we compared the abstract values elicited by the MFQ to their evaluations of the concrete moral violations of the MFVs.

Published

2024-10-16

How to Cite

Nunes, J. L., Almeida, G. F. C. F., Araujo, M. de, & Barbosa, S. D. J. (2024). Are Large Language Models Moral Hypocrites? A Study Based on Moral Foundations. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 7(1), 1074-1087. https://doi.org/10.1609/aies.v7i1.31704