Why Fair Labels Can Yield Unfair Predictions: Graphical Conditions for Introduced Unfairness

Authors

  • Carolyn Ashurst (Alan Turing Institute, University of Oxford)
  • Ryan Carey (University of Oxford)
  • Silvia Chiappa (DeepMind)
  • Tom Everitt (DeepMind)

DOI:

https://doi.org/10.1609/aaai.v36i9.21182

Keywords:

Philosophy and Ethics of AI (PEAI)

Abstract

In addition to reproducing discriminatory relationships present in the training data, machine learning (ML) systems can also introduce or amplify discriminatory effects. We refer to this as introduced unfairness, and investigate the conditions under which it may arise. To this end, we propose introduced total variation as a measure of introduced unfairness, and establish graphical conditions under which it may be incentivised to occur. These criteria imply that adding the sensitive attribute as a feature removes the incentive for introduced variation under well-behaved loss functions. Additionally, from a causal perspective, introduced path-specific effects shed light on when specific paths should be considered fair.
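To make the central quantity concrete, here is a minimal sketch of how introduced total variation could be computed in a binary setting. This is an illustration, not the paper's implementation: the function names, the binary sensitive attribute and outcome, and the reading of total variation as a difference in group-conditional outcome rates are assumptions; the paper's formal definition may differ.

```python
import numpy as np

def total_variation(outcome, sensitive):
    """TV distance between the outcome distributions of the two groups
    defined by a binary sensitive attribute (binary outcome assumed)."""
    outcome, sensitive = np.asarray(outcome), np.asarray(sensitive)
    p0 = outcome[sensitive == 0].mean()  # P(outcome = 1 | A = 0)
    p1 = outcome[sensitive == 1].mean()  # P(outcome = 1 | A = 1)
    return abs(p0 - p1)

def introduced_tv(labels, predictions, sensitive):
    """Introduced total variation: how much more the predictions vary
    with the sensitive attribute than the (fair) labels do."""
    return (total_variation(predictions, sensitive)
            - total_variation(labels, sensitive))

# Example: the labels are fair (equal positive rates across groups),
# yet a predictor that proxies the sensitive attribute introduces variation.
A    = np.array([0, 0, 0, 0, 1, 1, 1, 1])
Y    = np.array([0, 1, 0, 1, 1, 0, 1, 0])  # TV(Y; A)    = 0.0
Yhat = np.array([0, 0, 0, 1, 1, 1, 1, 0])  # TV(Yhat; A) = 0.5
print(introduced_tv(Y, Yhat, A))           # 0.5
```

The example shows the paper's titular phenomenon in miniature: the labels carry no variation with the sensitive attribute, but the predictions do, so the unfairness is introduced by the predictor rather than inherited from the data.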

Published

2022-06-28

How to Cite

Ashurst, C., Carey, R., Chiappa, S., & Everitt, T. (2022). Why Fair Labels Can Yield Unfair Predictions: Graphical Conditions for Introduced Unfairness. Proceedings of the AAAI Conference on Artificial Intelligence, 36(9), 9494-9503. https://doi.org/10.1609/aaai.v36i9.21182

Issue

Vol. 36 No. 9 (2022)

Section

AAAI Technical Track on Philosophy and Ethics of AI