A Conceptual Framework for Ethical Evaluation of Machine Learning Systems
DOI: https://doi.org/10.1609/aies.v7i1.31656
Abstract
Research in Responsible AI has developed a range of principles and practices to ensure that machine learning systems are used in a manner that is ethical and aligned with human values. However, a critical yet often neglected aspect of ethical ML is the set of ethical implications that arise when designing evaluations of ML systems. For instance, teams may have to balance a trade-off between highly informative tests that help ensure downstream product safety and potential fairness harms inherent to the testing procedures themselves. We conceptualize ethics-related concerns in standard ML evaluation techniques. Specifically, we present a utility framework that characterizes the key trade-off in ethical evaluation as balancing information gain against potential ethical harms. The framework then serves as a tool for characterizing the challenges teams face and for systematically disentangling the competing considerations they seek to balance. Differentiating among the types of issues encountered in evaluation allows us to highlight best practices from analogous domains, such as clinical trials and automotive crash testing, which navigate these issues in ways that can offer inspiration for improving evaluation processes in ML. Our analysis underscores the critical need for development teams to deliberately assess and manage the ethical complexities that arise during the evaluation of ML systems, and for the industry to move towards designing institutional policies to support ethical evaluations.
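As a rough illustration of the kind of trade-off the abstract describes, one could imagine scoring an evaluation design by rewarding its information gain and penalizing its expected ethical harm. The sketch below is only a hypothetical reading of that idea, not the paper's actual formulation; the quantities info_gain, ethical_harm, and the weight lam are placeholder assumptions.

    # Illustrative sketch only -- not the paper's formulation.
    # A hypothetical utility score for an evaluation design that rewards
    # information gain and penalizes expected ethical harm.

    def evaluation_utility(info_gain, ethical_harm, lam=1.0):
        # Higher info_gain is better; ethical_harm is penalized with weight lam.
        return info_gain - lam * ethical_harm

    # Example: compare two hypothetical evaluation designs.
    designs = {
        "live_user_study":     {"info_gain": 0.9, "ethical_harm": 0.6},
        "synthetic_benchmark": {"info_gain": 0.5, "ethical_harm": 0.1},
    }
    for name, d in designs.items():
        print(name, evaluation_utility(d["info_gain"], d["ethical_harm"]))

In such a sketch, the weight lam encodes how heavily a team discounts informativeness in the presence of potential harms; the paper's framework addresses how to reason about that balance rather than prescribing specific numbers.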
Published
2024-10-16
How to Cite
Gupta, N. R., Hullman, J., & Subramonyam, H. (2024). A Conceptual Framework for Ethical Evaluation of Machine Learning Systems. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 7(1), 534-546. https://doi.org/10.1609/aies.v7i1.31656
Issue
Section
Full Archival Papers