Public Attitudes on Performance for Algorithmic and Human Decision-Makers (Extended Abstract)

Authors

  • Kirk Bansak, University of California, Berkeley
  • Elisabeth Paulson, Harvard Business School

DOI:

https://doi.org/10.1609/aies.v7i1.31619

Abstract

This study explores public preferences between algorithmic and human decision-makers (DMs) in high-stakes contexts, how these preferences are shaped by performance metrics, and whether the public evaluates performance differently for algorithmic versus human DMs. Leveraging a conjoint experimental design, respondents (n = 9,000) chose between pairs of DM profiles in two scenarios: pre-trial release decisions and bank loan decisions. DM profiles varied on the DM's type (human vs. algorithm) and on three metrics—defendant crime rate/loan default rate, false positive rate (FPR) among white defendants/applicants, and FPR among minority defendants/applicants—as well as an implicit (un)fairness metric defined by the absolute difference between the two FPRs. Controlling for performance, we observe a general tendency to favor human DMs, though this is driven by a subset of respondents who expect human DMs to perform better in the real world; an analogous group holds the opposite preference for algorithmic DMs. We also find that the relative importance of the four performance metrics remains consistent across DM type, suggesting that the public's preferences regarding DM performance do not vary fundamentally between algorithmic and human DMs. Taken together, the results suggest that people hold very different beliefs about which type of DM (human or algorithmic) will deliver better performance and should be preferred, but they have similar desires about what that performance should be regardless of DM type.
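The implicit (un)fairness metric described above can be made concrete with a short sketch. The snippet below, which uses illustrative names not taken from the paper, computes group-specific false positive rates and their absolute difference, the quantity the study uses as its (un)fairness measure:

```python
def false_positive_rate(y_true, y_pred):
    """FPR = FP / (FP + TN) over binary labels (1 = adverse decision)."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return fp / (fp + tn) if (fp + tn) else 0.0

def fpr_gap(y_true, y_pred, group):
    """Absolute difference between the FPRs of two groups
    (here labeled 'white' and 'minority', following the study's design)."""
    def group_fpr(g):
        yt = [t for t, gr in zip(y_true, group) if gr == g]
        yp = [p for p, gr in zip(y_pred, group) if gr == g]
        return false_positive_rate(yt, yp)
    return abs(group_fpr("white") - group_fpr("minority"))
```

For example, if a DM wrongly flags one of two low-risk white applicants (FPR 0.5) and both of two low-risk minority applicants (FPR 1.0), the gap is 0.5.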

Published

2024-10-16

How to Cite

Bansak, K., & Paulson, E. (2024). Public Attitudes on Performance for Algorithmic and Human Decision-Makers (Extended Abstract). Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 7(1), 81-81. https://doi.org/10.1609/aies.v7i1.31619