Popularizing Fairness: Group Fairness and Individual Welfare

Authors

  • Andrew Estornell, Washington University in St. Louis
  • Sanmay Das, George Mason University
  • Brendan Juba, Washington University in St. Louis
  • Yevgeniy Vorobeychik, Washington University in St. Louis

DOI

https://doi.org/10.1609/aaai.v37i6.25910

Keywords

ML: Bias and Fairness

Abstract

Group-fair learning methods typically seek to ensure that some measure of prediction efficacy for (often historically) disadvantaged minority groups is comparable to that for the majority of the population. When a principal seeks to adopt a group-fair approach to replace a conventional one, the principal may face opposition from those who feel they may be harmed by the switch, and this, in turn, may deter adoption. We propose that a potential mitigation to this concern is to ensure that a group-fair model is also popular, in the sense that, for a majority of the target population, it yields a preferred distribution over outcomes compared with the conventional model. In this paper, we show that state-of-the-art fair learning approaches are often unpopular in this sense. We propose several efficient algorithms for postprocessing an existing group-fair learning scheme to improve its popularity while retaining fairness. Through extensive experiments, we demonstrate that the proposed postprocessing approaches are highly effective in practice.
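
To make the popularity notion above concrete, here is a minimal sketch (not the paper's algorithm) of how one might measure whether a group-fair model is popular relative to a conventional one. It assumes binary outcomes where each individual prefers a higher probability of the positive label; the function name `popularity` and the arrays `p_conv` and `p_fair` are hypothetical, introduced only for illustration.

```python
import numpy as np

def popularity(p_conv: np.ndarray, p_fair: np.ndarray) -> float:
    """Fraction of individuals who weakly prefer the fair model.

    Assumes each individual prefers whichever model gives them a
    higher probability of the positive (favorable) outcome.

    p_conv, p_fair: per-individual probabilities of the positive
    outcome under the conventional and group-fair models.
    """
    return float(np.mean(p_fair >= p_conv))

# Hypothetical data: a fair model counts as "popular" when a
# majority of the target population prefers its outcome distribution.
rng = np.random.default_rng(0)
p_conv = rng.uniform(size=1000)
p_fair = np.clip(p_conv + rng.normal(0.02, 0.1, size=1000), 0.0, 1.0)

share = popularity(p_conv, p_fair)
print(f"share preferring fair model: {share:.3f}, popular: {share > 0.5}")
```

Under this reading, the paper's postprocessing step would adjust an existing group-fair model so that this preferred share exceeds one half while the fairness guarantee is preserved; the exact preference model over outcome distributions is specified in the paper itself.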

Published

2023-06-26

How to Cite

Estornell, A., Das, S., Juba, B., & Vorobeychik, Y. (2023). Popularizing Fairness: Group Fairness and Individual Welfare. Proceedings of the AAAI Conference on Artificial Intelligence, 37(6), 7485-7493. https://doi.org/10.1609/aaai.v37i6.25910

Section

AAAI Technical Track on Machine Learning I