The Importance of Modeling Data Missingness in Algorithmic Fairness: A Causal Perspective

Authors

  • Naman Goel, ETH Zurich
  • Alfonso Amayuelas, EPFL Lausanne
  • Amit Deshpande, Microsoft Research
  • Amit Sharma, Microsoft Research

DOI:

https://doi.org/10.1609/aaai.v35i9.16926

Keywords:

Ethics -- Bias, Fairness, Transparency & Privacy; Bias, Fairness & Equity; Human-in-the-loop Machine Learning; Causality

Abstract

Training datasets for machine learning often have some form of missingness. For example, to learn a model for deciding whom to give a loan, the available training data includes individuals who were granted a loan in the past, but not those who were denied one. This missingness, if ignored, nullifies any fairness guarantee of the training procedure when the model is deployed. Using causal graphs, we characterize the missingness mechanisms in different real-world scenarios. We show conditions under which various distributions used in popular fairness algorithms can or cannot be recovered from the training data. Our theoretical results imply that many of these algorithms cannot guarantee fairness in practice. Modeling missingness also helps to identify correct design principles for fair algorithms. For example, in multi-stage settings where decisions are made in multiple screening rounds, we use our framework to derive the minimal distributions required to design a fair algorithm. Our proposed algorithm also decentralizes the decision-making process, while achieving performance similar to that of the optimal algorithm, which requires centralization and non-recoverable distributions.
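
The following sketch is purely illustrative and does not appear in the paper: the data-generating process, variable names, and thresholds are all hypothetical assumptions. It shows the loan-approval missingness pattern described in the abstract by training one classifier only on previously approved applicants (the only individuals whose outcomes are observed) and comparing its group-wise positive rates against an oracle model trained on the full population.

    # Illustrative sketch (hypothetical data, not from the paper): outcomes
    # are observed only for applicants approved under a historical policy.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 50_000

    a = rng.integers(0, 2, size=n)                        # sensitive attribute
    x = rng.normal(loc=1.0 + 0.5 * a, scale=1.0, size=n)  # credit-score proxy
    p_repay = 1.0 / (1.0 + np.exp(-(x - 1.0)))            # repayment depends on x only
    y = rng.binomial(1, p_repay)                          # true repayment outcome

    # Historical policy approves mostly high-x applicants, so y is
    # missing for everyone below the (noisy) approval threshold.
    approved = x + rng.normal(0.0, 0.5, size=n) > 1.2

    features = np.column_stack([x, a])

    # Model trained on the selected sample that is actually available.
    clf_selected = LogisticRegression().fit(features[approved], y[approved])
    # Oracle model trained on the full population (unavailable in practice).
    clf_oracle = LogisticRegression().fit(features, y)

    for name, clf in [("selected sample", clf_selected),
                      ("full population", clf_oracle)]:
        pred = clf.predict(features)
        rates = [pred[a == g].mean() for g in (0, 1)]
        print(f"{name}: positive rate for a=0: {rates[0]:.3f}, a=1: {rates[1]:.3f}")

Because the approved subsample over-represents high-x applicants, the distribution seen by the first model differs from the deployment population; this is the kind of recoverability gap the abstract refers to.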

Published

2021-05-18

How to Cite

Goel, N., Amayuelas, A., Deshpande, A., & Sharma, A. (2021). The Importance of Modeling Data Missingness in Algorithmic Fairness: A Causal Perspective. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 7564-7573. https://doi.org/10.1609/aaai.v35i9.16926

Issue

Vol. 35 No. 9 (2021)

Section

AAAI Technical Track on Machine Learning II