On The Stability of Moral Preferences: A Problem with Computational Elicitation Methods

Authors

  • Kyle Boerstler, Activision
  • Vijay Keswani, Duke University
  • Lok Chan, Duke University
  • Jana Schaich Borg, Duke University
  • Vincent Conitzer, Carnegie Mellon University and University of Oxford
  • Hoda Heidari, Carnegie Mellon University
  • Walter Sinnott-Armstrong, Duke University

DOI:

https://doi.org/10.1609/aies.v7i1.31626

Abstract

Preference elicitation frameworks feature heavily in research on participatory ethical AI tools and provide a viable mechanism to elicit and incorporate the moral values of various stakeholders. As part of the elicitation process, surveys about moral preferences, opinions, and judgments are typically administered only once to each participant. This methodological practice is reasonable if participants’ responses are stable over time, so that, all else being held constant, their responses today will be the same as their responses to the same questions at a later time. However, we do not know how often that is the case. It is possible that participants’ true moral preferences change, are subject to temporary moods or whims, or are influenced by environmental factors we do not track. If participants’ moral responses are unstable in these ways, it would raise important methodological and theoretical questions about how participants’ true moral preferences, opinions, and judgments can be ascertained. We address this possibility here by asking the same survey participants the same moral questions (about which patient should receive a kidney when only one is available) ten times, in ten separate sessions over two weeks, varying only the presentation order across sessions. We measured how often participants gave different responses to simple (Study One) and more complicated (Study Two) controversial and uncontroversial repeated scenarios. On average, the fraction of times participants changed their responses to controversial scenarios (i.e., were unstable) was around 10-18% (±14-15%) across studies, and this instability was positively associated with response time and decision-making difficulty. We discuss the implications of these results for the efficacy of common moral preference elicitation methods, highlighting how response instability can cause value misalignment between stakeholders and AI tools trained on their moral judgments.
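The abstract's headline statistic is the fraction of repeated presentations on which a participant's answer to the same scenario changed. As a rough illustration only (this is not the authors' code, and the paper's exact definition may differ), here is a minimal Python sketch of one plausible instability measure: the share of a participant's responses to a repeated scenario that deviate from their modal response. The names `instability_fraction` and `responses` are hypothetical.

```python
def instability_fraction(answers):
    """Fraction of repeated presentations on which the response differed
    from the participant's modal (most frequent) response.
    One plausible operationalization; not necessarily the paper's."""
    if not answers:
        return 0.0
    modal = max(set(answers), key=answers.count)
    return sum(a != modal for a in answers) / len(answers)

# Hypothetical data: each participant answers each scenario once per
# session, across ten sessions.
responses = {
    "p1": {"kidney_scenario_A": ["left", "left", "right", "left", "left",
                                 "left", "left", "left", "right", "left"]},
}

for participant, scenarios in responses.items():
    per_scenario = [instability_fraction(ans) for ans in scenarios.values()]
    print(participant, sum(per_scenario) / len(per_scenario))  # p1 0.2
```

Under this measure, a participant who gives the same answer in all ten sessions scores 0.0, while the example participant above, who switches twice, scores 0.2, i.e., in the range of the 10-18% average instability the studies report for controversial scenarios.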

Published

2024-10-16

How to Cite

Boerstler, K., Keswani, V., Chan, L., Schaich Borg, J., Conitzer, V., Heidari, H., & Sinnott-Armstrong, W. (2024). On The Stability of Moral Preferences: A Problem with Computational Elicitation Methods. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 7(1), 156-167. https://doi.org/10.1609/aies.v7i1.31626