On the Pros and Cons of Active Learning for Moral Preference Elicitation
DOI:
https://doi.org/10.1609/aies.v7i1.31673
Abstract
Computational preference elicitation methods are tools used to learn people’s preferences quantitatively in a given context. Recent works on preference elicitation advocate for active learning as an efficient method to iteratively construct queries (framed as comparisons between context-specific cases) that are likely to be most informative about an agent’s underlying preferences. In this work, we argue that the use of active learning for moral preference elicitation relies on certain assumptions about the underlying moral preferences, which can be violated in practice. Specifically, we highlight the following common assumptions: (a) preferences are stable over time and not sensitive to the sequence of presented queries, (b) the appropriate hypothesis class is chosen to model moral preferences, and (c) noise in the agent’s responses is limited. While these assumptions can be appropriate for preference elicitation in certain domains, prior research on moral psychology suggests they may not be valid for moral judgments. Through a synthetic simulation of preferences that violate the above assumptions, we observe that active learning can perform similarly to or worse than a basic random query selection method in certain settings. Yet, simulation results also demonstrate that active learning can still be viable if the degree of instability or noise is relatively small and when the agent’s preferences can be approximately represented with the hypothesis class used for learning. Our study highlights the nuances associated with effective moral preference elicitation in practice and advocates for the cautious use of active learning as a methodology to learn moral preferences.
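To make the comparison described in the abstract concrete, the sketch below simulates pairwise preference elicitation for an agent with noisy linear utilities and contrasts an uncertainty-based active query strategy with random query selection. This is a minimal illustration only; all names, parameters, and the specific uncertainty heuristic are assumptions for exposition and are not the paper's actual experimental code.

```python
# Hypothetical sketch: active (uncertainty-based) vs. random query selection
# for pairwise preference elicitation with a noisy linear-utility agent.
import numpy as np

rng = np.random.default_rng(0)
d, n_items, n_queries, noise = 5, 100, 40, 0.1   # illustrative settings

w_true = rng.normal(size=d)                 # agent's (unknown) utility weights
items = rng.normal(size=(n_items, d))       # context-specific cases as feature vectors

def respond(i, j):
    """Noisy pairwise response: 1 if case i is preferred to case j."""
    margin = items[i] @ w_true - items[j] @ w_true
    p = 1.0 / (1.0 + np.exp(-margin / max(noise, 1e-6)))
    return int(rng.random() < p)

def fit(pairs, labels):
    """Crude logistic-style estimate of the weight vector from observed comparisons."""
    w = np.zeros(d)
    for _ in range(200):
        grad = np.zeros(d)
        for (i, j), y in zip(pairs, labels):
            x = items[i] - items[j]
            p = 1.0 / (1.0 + np.exp(-(w @ x)))
            grad += (y - p) * x
        w += 0.05 * grad
    return w

def run(active):
    pairs, labels = [], []
    w_hat = np.zeros(d)
    for _ in range(n_queries):
        cand = [tuple(rng.choice(n_items, 2, replace=False)) for _ in range(50)]
        if active:
            # pick the comparison the current model is least certain about
            i, j = min(cand, key=lambda pr: abs((items[pr[0]] - items[pr[1]]) @ w_hat))
        else:
            i, j = cand[0]                  # random query selection baseline
        pairs.append((i, j))
        labels.append(respond(i, j))
        w_hat = fit(pairs, labels)
    # alignment of the learned weights with the true weights (cosine similarity)
    return w_hat @ w_true / (np.linalg.norm(w_hat) * np.linalg.norm(w_true) + 1e-12)

print("active alignment:", round(run(True), 3))
print("random alignment:", round(run(False), 3))
```

Under the stable, low-noise, well-specified setup assumed here, the active strategy typically recovers the weights with fewer queries than the random baseline; the paper's point is that this advantage can shrink or reverse once preference instability, model misspecification, or heavy response noise are introduced.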
Published
2024-10-16
How to Cite
Keswani, V., Conitzer, V., Heidari, H., Schaich Borg, J., & Sinnott-Armstrong, W. (2024). On the Pros and Cons of Active Learning for Moral Preference Elicitation. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 7(1), 711-723. https://doi.org/10.1609/aies.v7i1.31673
Issue
Vol. 7 No. 1 (2024)
Section
Full Archival Papers