Justicia: A Stochastic SAT Approach to Formally Verify Fairness

Authors

  • Bishwamittra Ghosh, National University of Singapore, Singapore
  • Debabrota Basu, Chalmers University of Technology, Göteborg, Sweden; Scool, Inria Lille - Nord Europe, France
  • Kuldeep S. Meel, National University of Singapore, Singapore

DOI:

https://doi.org/10.1609/aaai.v35i9.16925

Keywords:

Ethics -- Bias, Fairness, Transparency & Privacy, Satisfiability

Abstract

As a technology, ML is oblivious to societal good or bad, and thus the field of fair machine learning has stepped up to propose multiple mathematical definitions, algorithms, and systems to ensure different notions of fairness in ML applications. Given this multitude of proposals, it has become imperative to formally verify the fairness metrics satisfied by different algorithms on different datasets. In this paper, we propose a stochastic satisfiability (SSAT) framework, Justicia, that formally verifies different fairness measures of supervised learning algorithms with respect to the underlying data distribution. We instantiate Justicia on multiple classification and bias-mitigation algorithms and datasets to verify different fairness metrics, such as disparate impact, statistical parity, and equalized odds. Justicia is scalable, accurate, and operates on non-Boolean and compound sensitive attributes, unlike existing distribution-based verifiers such as FairSquare and VeriFair. Being distribution-based by design, Justicia is also more robust than sample-based verifiers, such as AIF360, that operate on specific test samples. We also theoretically bound the finite-sample error of the verified fairness measure.
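
For intuition only, the sketch below shows how two of the group fairness metrics named above, statistical parity and disparate impact, are commonly computed from a classifier's predictions over a binary sensitive attribute. It is not Justicia's SSAT-based, distribution-aware verification; the function name and toy data are illustrative assumptions.

```python
# Illustrative sketch (assumed helper, not part of Justicia): sample-based
# estimates of statistical parity difference and disparate impact.
import numpy as np

def group_fairness(y_pred, sensitive):
    """y_pred: binary predictions (0/1); sensitive: 1 = privileged, 0 = unprivileged."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    # Positive-prediction rate within each group
    rate_priv = y_pred[sensitive == 1].mean()
    rate_unpriv = y_pred[sensitive == 0].mean()
    statistical_parity = rate_unpriv - rate_priv  # close to 0 indicates parity
    disparate_impact = rate_unpriv / rate_priv    # close to 1 indicates parity
    return statistical_parity, disparate_impact

# Toy usage with made-up predictions and a Boolean sensitive attribute
sp, di = group_fairness([1, 0, 1, 1, 0, 1], [1, 1, 1, 0, 0, 0])
print(sp, di)
```

Such sample-based estimates are what AIF360-style auditing reports; Justicia instead verifies these measures with respect to the underlying data distribution.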

Published

2021-05-18

How to Cite

Ghosh, B., Basu, D., & Meel, K. S. (2021). Justicia: A Stochastic SAT Approach to Formally Verify Fairness. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 7554-7563. https://doi.org/10.1609/aaai.v35i9.16925

Issue

Vol. 35 No. 9 (2021)

Section

AAAI Technical Track on Machine Learning II