Are Large Language Models Moral Hypocrites? A Study Based on Moral Foundations
DOI:
https://doi.org/10.1609/aies.v7i1.31704
Abstract
Large language models (LLMs) have taken centre stage in debates on Artificial Intelligence. Yet there remains a gap in how to assess LLMs' conformity to important human values. In this paper, we investigate whether state-of-the-art LLMs, GPT-4 and Claude 2.1 (Gemini Pro and LLAMA 2 did not generate valid results), are moral hypocrites. We employ two research instruments based on Moral Foundations Theory: (i) the Moral Foundations Questionnaire (MFQ), which investigates which values are considered morally relevant in abstract moral judgements; and (ii) the Moral Foundations Vignettes (MFVs), which evaluate moral cognition in concrete scenarios related to each moral foundation. We characterise conflicts in values between these different abstractions of moral evaluation as hypocrisy. We found that both models displayed reasonable consistency within each instrument compared to humans, but they displayed contradictory and hypocritical behaviour when we compared the abstract values expressed in the MFQ to the evaluation of concrete moral violations in the MFVs.
Published
2024-10-16
How to Cite
Nunes, J. L., Almeida, G. F. C. F., Araujo, M. de, & Barbosa, S. D. J. (2024). Are Large Language Models Moral Hypocrites? A Study Based on Moral Foundations. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 7(1), 1074-1087. https://doi.org/10.1609/aies.v7i1.31704
Section
Full Archival Papers