Fairness without Imputation: A Decision Tree Approach for Fair Prediction with Missing Values

Authors

  • Haewon Jeong, Harvard University
  • Hao Wang, Harvard University
  • Flavio P. Calmon, Harvard University

DOI:

https://doi.org/10.1609/aaai.v36i9.21189

Keywords:

Philosophy And Ethics Of AI (PEAI), Machine Learning (ML), Humans And AI (HAI)

Abstract

We investigate the fairness concerns of training a machine learning model using data with missing values. Although many fairness intervention methods exist in the literature, most of them require a complete training set as input. In practice, data can have missing values, and missing-data patterns can depend on group attributes (e.g., gender or race). Simply applying off-the-shelf fair learning algorithms to an imputed dataset may lead to an unfair model. In this paper, we first theoretically analyze different sources of discrimination risk when training with an imputed dataset. We then propose an integrated decision-tree approach that does not require separate imputation and learning steps: we train a tree with missing incorporated as attribute (MIA), which handles missing values without explicit imputation, while optimizing a fairness-regularized objective function. Through several experiments on real-world datasets, we demonstrate that our approach outperforms existing fairness intervention methods applied to an imputed dataset.
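To make the MIA idea from the abstract concrete, the sketch below scores a single candidate split by routing rows with missing values either to the left or to the right child (the two MIA options) and adds a demographic-parity penalty weighted by a coefficient. This is a minimal illustration, not the authors' implementation: the helper names, the Gini/parity choices, the lambda_fair weight, and the toy data are all illustrative assumptions.

import numpy as np

def gini(y):
    """Gini impurity of a binary label vector y."""
    if len(y) == 0:
        return 0.0
    p = y.mean()
    return 2.0 * p * (1.0 - p)

def dp_gap(y_pred, group):
    """Demographic-parity gap: |P(pred=1 | group=0) - P(pred=1 | group=1)|."""
    rates = []
    for g in (0, 1):
        mask = group == g
        rates.append(y_pred[mask].mean() if mask.any() else 0.0)
    return abs(rates[0] - rates[1])

def mia_split_score(x, y, group, threshold, lambda_fair=0.5):
    """Score one candidate split on feature x at `threshold`, trying both MIA
    options: send NaN rows with the left child or with the right child.
    Returns (score, missing_goes_left) for the better option; lower is better.
    score = weighted child Gini impurity + lambda_fair * demographic-parity gap
    of the majority-vote predictions induced by the split (illustrative choice,
    not the paper's exact objective)."""
    missing = np.isnan(x)
    best = None
    for missing_left in (True, False):
        left = (x <= threshold) & ~missing
        right = (x > threshold) & ~missing
        if missing_left:
            left = left | missing
        else:
            right = right | missing
        if left.sum() == 0 or right.sum() == 0:
            continue  # degenerate split, skip
        n = len(y)
        impurity = (left.sum() / n) * gini(y[left]) + (right.sum() / n) * gini(y[right])
        # Majority-vote prediction in each child.
        y_pred = np.where(left, int(y[left].mean() >= 0.5), int(y[right].mean() >= 0.5))
        score = impurity + lambda_fair * dp_gap(y_pred, group)
        if best is None or score < best[0]:
            best = (score, missing_left)
    return best

# Toy usage: one feature with ~30% missing values, binary labels, binary group.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
x[rng.random(200) < 0.3] = np.nan
group = rng.integers(0, 2, size=200)
y = ((np.nan_to_num(x, nan=0.0) > 0) | (group == 1)).astype(int)

print(mia_split_score(x, y, group, threshold=0.0))

A full tree learner would repeat this search over all features and thresholds and recurse on the resulting children; see the paper for the actual fairness-regularized objective and training procedure.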

Published

2022-06-28

How to Cite

Jeong, H., Wang, H., & Calmon, F. P. (2022). Fairness without Imputation: A Decision Tree Approach for Fair Prediction with Missing Values. Proceedings of the AAAI Conference on Artificial Intelligence, 36(9), 9558-9566. https://doi.org/10.1609/aaai.v36i9.21189

Section

AAAI Technical Track on Philosophy and Ethics of AI