On Testing for Discrimination Using Causal Models

Authors

  • Hana Chockler, causaLens and King's College London
  • Joseph Y. Halpern, Cornell University

DOI:

https://doi.org/10.1609/aaai.v36i5.20494

Keywords:

Knowledge Representation And Reasoning (KRR), Reasoning Under Uncertainty (RU), Domain(s) Of Application (APP)

Abstract

Consider a bank that uses an AI system to decide which loan applications to approve. We want to ensure that the system is fair, that is, it does not discriminate against applicants based on a predefined list of sensitive attributes, such as gender and ethnicity. We expect there to be a regulator whose job it is to certify the bank’s system as fair or unfair. We consider issues that the regulator will have to confront when making such a decision, including the precise definition of fairness, dealing with proxy variables, and dealing with what we call allowed variables, that is, variables such as salary on which the decision is allowed to depend, despite being correlated with sensitive variables. We show (among other things) that the problem of deciding fairness as we have defined it is co-NP-complete, but argue that, despite this hardness, the problem should be manageable in practice.
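The fairness notion sketched in the abstract can be made concrete on a toy structural causal model: intervene on the sensitive attribute while holding the allowed variables fixed at their actual values, and require that the decision not change. The following is a minimal sketch of such a check; the structural equations and the names salary, decision, and is_fair are hypothetical illustrations, not the paper's formal definitions.

```python
# A minimal sketch of an interventional fairness check on a toy structural
# causal model (SCM). All variable names and structural equations below are
# hypothetical illustrations, not the paper's actual model.

def salary(gender, noise):
    # Salary is an "allowed" variable: it is correlated with the sensitive
    # attribute, but the decision may legitimately depend on it.
    return 40000 + 5000 * gender + noise

def decision(gender, salary_value):
    # The bank's decision rule under test. A fair rule must not use the
    # sensitive attribute except through allowed variables such as salary.
    return salary_value > 42000

def is_fair(decision_fn, contexts):
    # Flip the sensitive attribute while holding the allowed variable
    # (salary) fixed at its actual value; a fair decision must not change.
    for gender, noise in contexts:
        s = salary(gender, noise)
        if decision_fn(0, s) != decision_fn(1, s):
            return False
    return True

# Enumerate a small set of contexts (exogenous settings) for the check.
contexts = [(g, n) for g in (0, 1) for n in (-3000, 0, 3000)]
print(is_fair(decision, contexts))  # True: gender affects the decision only via salary
```

Checking every context by brute force, as above, takes time exponential in the number of variables, which is consistent with the co-NP-completeness result; the abstract's claim that the problem should be manageable in practice corresponds to the relevant models typically being small.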

Published

2022-06-28

How to Cite

Chockler, H., & Halpern, J. Y. (2022). On Testing for Discrimination Using Causal Models. Proceedings of the AAAI Conference on Artificial Intelligence, 36(5), 5548-5555. https://doi.org/10.1609/aaai.v36i5.20494

Section

AAAI Technical Track on Knowledge Representation and Reasoning