Medical AI, Categories of Value Conflict, and Conflict Bypasses
DOI:
https://doi.org/10.1609/aies.v7i1.31740
Abstract
It is becoming clear that, in the process of aligning AI with human values, one glaring ethical problem is that of value conflict. It is not obvious what we should do when two compelling values (such as autonomy and safety) come into conflict with one another in the design or implementation of a medical AI technology. This paper shares findings from a scoping review at the intersection of three concepts—AI, moral value, and health—that have to do with value conflict and arbitration. The paper looks at some important and unique cases of value conflict, and then describes three possible categories of value conflict: personal value conflict, interpersonal or intercommunal value conflict, and definitional value conflict. It then describes three general paths forward in addressing value conflict: additional ethical theory, additional empirical evidence, and bypassing the conflict altogether. Finally, it reflects on the efficacy of these three paths forward as ways of addressing the three categories of value conflict, and motions toward what is needed for better approaching value conflicts in medical AI.
Published
2024-10-16
How to Cite
Victor, G., & Bélisle-Pipon, J.-C. (2024). Medical AI, Categories of Value Conflict, and Conflict Bypasses. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 7(1), 1482-1489. https://doi.org/10.1609/aies.v7i1.31740
Section
Full Archival Papers