Robust Fairness Under Covariate Shift

Authors

  • Ashkan Rezaei, University of Illinois at Chicago
  • Anqi Liu, California Institute of Technology
  • Omid Memarrast, University of Illinois at Chicago
  • Brian D. Ziebart, University of Illinois at Chicago

DOI:

https://doi.org/10.1609/aaai.v35i11.17135

Keywords:

Ethics -- Bias, Fairness, Transparency & Privacy, Adversarial Learning & Robustness, Classification and Regression

Abstract

Making predictions that are fair with respect to protected attributes (race, gender, age, etc.) has become an important requirement for classification algorithms. Existing techniques derive a fair model from sampled labeled data, relying on the assumption that training and testing data are drawn independently and identically (iid) from the same distribution. In practice, distribution shift can and does occur between training and testing datasets as the characteristics of individuals interacting with the machine learning system change. We investigate fairness under covariate shift, a relaxation of the iid assumption in which the inputs or covariates change while the conditional label distribution remains the same. We seek fair decisions under these assumptions on target data with unknown labels. We propose an approach that obtains a predictor that is robust to worst-case testing performance while satisfying target fairness requirements and matching statistical properties of the source data. We demonstrate the benefits of our approach on benchmark prediction tasks.
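Under covariate shift, the input distribution p(x) differs between source (training) and target (testing) data while the conditional label distribution p(y | x) is shared. As a rough illustration only, and not the authors' robust minimax method, the sketch below estimates the density ratio p_target(x) / p_source(x) with a domain classifier, reweights source examples accordingly, and uses the same weights to estimate a demographic-parity gap on the unlabeled target; all variable names and data are hypothetical.

```python
# Illustrative sketch (not the paper's method): covariate-shift correction by
# importance weighting, plus an importance-weighted fairness estimate.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical source and target samples: same p(y | x), shifted p(x).
n = 2000
x_src = rng.normal(loc=0.0, scale=1.0, size=(n, 2))
x_tgt = rng.normal(loc=1.0, scale=1.0, size=(n, 2))
group = (rng.random(n) < 0.5).astype(int)                      # protected attribute
y_src = (x_src[:, 0] + 0.5 * group + rng.normal(0, 0.5, n) > 0).astype(int)

# Estimate the density ratio p_tgt(x) / p_src(x) with a domain classifier:
# train it to separate target (label 1) from source (label 0), then use the
# predicted odds as the ratio.
domain_clf = LogisticRegression().fit(
    np.vstack([x_src, x_tgt]), np.concatenate([np.zeros(n), np.ones(n)])
)
p_tgt_given_x = domain_clf.predict_proba(x_src)[:, 1]
w = p_tgt_given_x / np.clip(1.0 - p_tgt_given_x, 1e-6, None)   # density ratio

# Fit a predictor on the reweighted source data (standard covariate-shift
# correction; the paper instead solves a robust worst-case formulation).
clf = LogisticRegression().fit(x_src, y_src, sample_weight=w)
pred = clf.predict(x_src)

# Importance-weighted estimate of demographic parity on the target:
# compare weighted positive-prediction rates across protected groups.
rates = [np.average(pred[group == g], weights=w[group == g]) for g in (0, 1)]
print("estimated target positive rates by group:", rates)
print("demographic parity gap:", abs(rates[0] - rates[1]))
```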

Published

2021-05-18

How to Cite

Rezaei, A., Liu, A., Memarrast, O., & Ziebart, B. D. (2021). Robust Fairness Under Covariate Shift. Proceedings of the AAAI Conference on Artificial Intelligence, 35(11), 9419-9427. https://doi.org/10.1609/aaai.v35i11.17135

Section

AAAI Technical Track on Machine Learning IV