Group Fairness by Probabilistic Modeling with Latent Fair Decisions

Authors

  • YooJung Choi, UCLA
  • Meihua Dang, UCLA
  • Guy Van den Broeck, UCLA

DOI:

https://doi.org/10.1609/aaai.v35i13.17431

Keywords:

Stochastic Models & Probabilistic Inference, Ethics -- Bias, Fairness, Transparency & Privacy, Probabilistic Graphical Models

Abstract

Machine learning systems are increasingly being used to make consequential decisions, such as loan approvals and criminal justice risk assessments, so ensuring the fairness of these systems is critical. This is often challenging because the labels in the data are biased. This paper studies learning fair probability distributions from biased data by explicitly modeling a latent variable that represents a hidden, unbiased label. In particular, we aim to achieve demographic parity by enforcing certain independencies in the learned model. We also show that group fairness guarantees are meaningful only if the distribution used to provide those guarantees indeed captures the real-world data. To closely model the data distribution, we employ probabilistic circuits, an expressive and tractable probabilistic model, and propose an algorithm to learn them from incomplete data. We show on real-world datasets that our approach not only models how the data was generated more faithfully than existing methods but also achieves competitive accuracy. Moreover, we evaluate our approach on a synthetic dataset in which the observed labels are generated from fair labels with added bias, and demonstrate that the fair labels are successfully retrieved.
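As a brief illustration of the independence constraint the abstract refers to (a sketch using notation not taken from the abstract: S for the sensitive attribute, Df for the latent fair decision, and D for the observed, possibly biased label), demographic parity with respect to the latent fair decision can be written as:

% Hypothetical notation for illustration only:
% S = sensitive attribute, D_f = latent fair decision, D = observed (possibly biased) label.
% Demographic parity is enforced on the latent decision by requiring independence from S:
\[
  D_f \perp S
  \quad\Longleftrightarrow\quad
  \Pr(D_f = 1 \mid S = s) = \Pr(D_f = 1) \quad \text{for all } s,
\]
% while the observed label D may still depend on both D_f and S,
% capturing how bias can enter the recorded labels.

Under this reading, decisions made according to the latent variable satisfy demographic parity even though the recorded labels do not; the exact model structure and learning procedure are given in the paper itself.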

Published

2021-05-18

How to Cite

Choi, Y., Dang, M., & Van den Broeck, G. (2021). Group Fairness by Probabilistic Modeling with Latent Fair Decisions. Proceedings of the AAAI Conference on Artificial Intelligence, 35(13), 12051-12059. https://doi.org/10.1609/aaai.v35i13.17431

Section

AAAI Technical Track on Reasoning under Uncertainty