Algorithmic Fairness Beyond Legally Protected Groups and When Group Labels Are Unknown

Authors

  • Abdoul Jalil D. Mahamadou, Stanford University
  • Judy W. Gichoya, Emory University
  • Artem A. Trotsyuk, Stanford University

DOI

https://doi.org/10.1609/aies.v8i1.36582

Abstract

The algorithmic fairness literature has focused on defining fairness group labels based on legally protected groups. This assumes that populations at risk of unfairness are known and that equity for these groups translates to broader fairness. However, this risks missing emerging or context-specific at-risk populations. We illustrate this through a review of 73 studies of fairness in healthcare AI published between 2020 and 2024, as well as three case studies conducted at Stanford Health Care. The review reveals disproportionate use of protected characteristics (90%) as group labels, compared with socioeconomic factors (19%), clinical factors (14%), and system and institutional factors (5%). Through the case studies, we show how stakeholder engagement in ethical AI assessment, primarily designed to surface value conflicts, helps identify case-specific vulnerable populations that can inform fairness interventions. This study shows the need to expand fairness group label definitions to include a broader range of context-informed attributes. Doing so can help ensure that bias mitigation strategies are better grounded in real-world social contexts, leading to more context-aware definitions of harm and equity.

Published

2025-10-15

How to Cite

D. Mahamadou, A. J., Gichoya, J. W., & A. Trotsyuk, A. (2025). Algorithmic Fairness Beyond Legally Protected Groups and When Group Labels Are Unknown. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 8(1), 692-704. https://doi.org/10.1609/aies.v8i1.36582