A Human-in-the-Loop Fairness-Aware Model Selection Framework for Complex Fairness Objective Landscapes
DOI
https://doi.org/10.1609/aies.v7i1.31719

Abstract
Fairness-aware Machine Learning (FairML) applications are often characterized by complex social objectives and legal requirements, frequently involving multiple, potentially conflicting notions of fairness. Despite the well-known Impossibility Theorem of Fairness and extensive theoretical research on the statistical and socio-technical trade-offs between fairness metrics, many FairML tools still optimize or constrain for a single fairness objective. However, this one-sided optimization can inadvertently lead to violations of other relevant notions of fairness. In this socio-technical and empirical study, we frame fairness as a Many-Objective (MaO) problem by treating fairness metrics as conflicting objectives in a multi-objective (MO) sense. We introduce ManyFairHPO, a human-in-the-loop, fairness-aware model selection framework that enables practitioners to effectively navigate complex and nuanced fairness objective landscapes. ManyFairHPO aids in the identification, evaluation, and balancing of fairness metric conflicts and their related social consequences, leading to more informed and socially responsible model-selection decisions. Through a comprehensive empirical evaluation and a case study on the Law School Admissions problem, we demonstrate the effectiveness of ManyFairHPO in balancing multiple fairness objectives, mitigating risks such as self-fulfilling prophecies, and providing interpretable insights to guide stakeholders in making fairness-aware modeling decisions.
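The core idea of treating fairness metrics as conflicting objectives can be illustrated with a minimal Pareto-dominance sketch. This is not the paper's implementation of ManyFairHPO; the model names and metric values below are purely illustrative, and "lower is better" is assumed for every objective (e.g., fairness-metric violations and error rate).

```python
# Hypothetical sketch: select Pareto-optimal models over several
# conflicting objectives (fairness-metric violations plus error rate).
# All names and numbers are illustrative, not from the paper.

def is_dominated(a, b):
    """True if candidate a is dominated by b: b is no worse on every
    objective and strictly better on at least one (lower = better)."""
    return (all(bv <= av for av, bv in zip(a, b))
            and any(bv < av for av, bv in zip(a, b)))

# Each tuple: (demographic-parity violation, equalized-odds violation, error rate)
models = {
    "model_a": (0.02, 0.10, 0.15),
    "model_b": (0.08, 0.03, 0.14),
    "model_c": (0.05, 0.11, 0.16),  # worse than model_a on all three objectives
}

# Keep only candidates not dominated by any other candidate.
front = {name for name, scores in models.items()
         if not any(is_dominated(scores, other)
                    for oname, other in models.items() if oname != name)}

print(sorted(front))  # ['model_a', 'model_b']
```

Note that `model_a` and `model_b` are mutually non-dominated: each wins on a different fairness metric. This is exactly the situation where a human-in-the-loop step is needed, since choosing between them requires weighing the social consequences of each metric conflict rather than any purely numerical criterion.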
Published
2024-10-16
How to Cite
Robertson, J., Schmidt, T., Hutter, F., & Awad, N. (2024). A Human-in-the-Loop Fairness-Aware Model Selection Framework for Complex Fairness Objective Landscapes. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 7(1), 1231-1242. https://doi.org/10.1609/aies.v7i1.31719
Section
Full Archival Papers