What’s Individual About Individual Fairness?

Authors

  • Shai Ben-David, University of Waterloo and Vector Institute
  • Pascale Gourdeau, Vector Institute and University of Toronto
  • Tosca Lechner, Vector Institute and University of Toronto
  • Ruth Urner, York University

DOI:

https://doi.org/10.1609/aies.v8i1.36556

Abstract

Individual and group fairness notions abound in the machine learning literature. Each attempts to formalize harm against individuals or groups of people. In this work, we take a step back and aim to characterize, from a learning theory perspective, what is at the heart of individual fairness (IF) notions. We argue that fairness notions should be comparison-based and, in the case of IF notions, that any failure to be fair should give rise to finite evidence of unfairness. We also posit that IF notions should have an unfairness "direction", for example via an order on the set of potential decisions. Equipped with this framework, we present various ways unfair classifiers can be compared to each other. Comparing classifiers is essential in any situation where one must choose between not-perfectly-fair classifiers, e.g., when there are unavoidable trade-offs between learning objectives. We then adapt score-based measures of individual unfairness to measure how harm is distributed between population subgroups, which is more in line with group fairness. Crucially, our set-up retains evidence of harm at the individual level, allowing for algorithmic recourse or potential integration within legal frameworks.

Published

2025-10-15

How to Cite

Ben-David, S., Gourdeau, P., Lechner, T., & Urner, R. (2025). What’s Individual About Individual Fairness? Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 8(1), 379–390. https://doi.org/10.1609/aies.v8i1.36556