CertiFair: A Framework for Certified Global Fairness of Neural Networks

Authors

  • Haitham Khedr University of California, Irvine
  • Yasser Shoukry University of California, Irvine

DOI:

https://doi.org/10.1609/aaai.v37i7.25994

Keywords:

ML: Bias and Fairness, ML: Adversarial Learning & Robustness

Abstract

We consider the problem of whether a Neural Network (NN) model satisfies global individual fairness. Individual fairness, as defined by Dwork et al. (2012), requires that individuals who are similar with respect to a given task be treated similarly by the decision model. In this work, we have two main objectives. The first is to construct a verifier that checks whether the fairness property holds for a given NN in a classification task or provides a counterexample if it is violated; i.e., the model is fair if all similar individuals are classified the same, and unfair if a pair of similar individuals is classified differently. To that end, we construct a sound and complete verifier that verifies global individual fairness properties of ReLU NN classifiers using distance-based similarity metrics. The second objective of this paper is to provide a method for training provably fair NN classifiers from unfair (biased) data. We propose a fairness loss that can be used during training to enforce fair outcomes for similar individuals. We then provide provable bounds on the fairness of the resulting NN. We run experiments on commonly used, publicly available fairness datasets and show that global individual fairness can be improved by 96% without a significant drop in test accuracy.
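
As an illustration of the second objective, the short PyTorch sketch below shows one way a fairness term of this flavor can be combined with a standard classification loss during training. It is a minimal sketch under stated assumptions, not the loss or architecture proposed in the paper: the network, the binary sensitive attribute at sensitive_idx, and the trade-off weight lambda_fair are all hypothetical.

import torch
import torch.nn as nn

# Hypothetical ReLU classifier over 10 input features with 2 output classes.
model = nn.Sequential(
    nn.Linear(10, 32), nn.ReLU(),
    nn.Linear(32, 2),
)

def fairness_penalty(x, sensitive_idx):
    # Penalize output differences between x and a counterpart x_prime that
    # differs only in a binary sensitive attribute (one notion of "similar").
    x_prime = x.clone()
    x_prime[:, sensitive_idx] = 1.0 - x_prime[:, sensitive_idx]
    return (model(x) - model(x_prime)).abs().mean()

criterion = nn.CrossEntropyLoss()
lambda_fair = 0.1  # assumed weight balancing accuracy against the fairness term

def total_loss(x, y, sensitive_idx=0):
    # Standard cross-entropy plus the fairness regularizer.
    return criterion(model(x), y) + lambda_fair * fairness_penalty(x, sensitive_idx)

In this sketch, driving the penalty toward zero pushes the classifier to produce the same outputs, and hence the same class, for the two similar inputs, which is the behavior the global individual fairness property asks for.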

Published

2023-06-26

How to Cite

Khedr, H., & Shoukry, Y. (2023). CertiFair: A Framework for Certified Global Fairness of Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 37(7), 8237-8245. https://doi.org/10.1609/aaai.v37i7.25994

Section

AAAI Technical Track on Machine Learning II