An Information-Flow Perspective on Algorithmic Fairness
DOI:
https://doi.org/10.1609/aaai.v38i14.29458
Keywords:
ML: Ethics, Bias, and Fairness, ML: Information Theory, PEAI: Bias, Fairness & Equity, RU: Causality, RU: Graphical Models
Abstract
This work presents insights gained from investigating the relationship between algorithmic fairness and the concept of secure information flow. The problem of enforcing secure information flow is well-studied in the context of information security: if secret information may "flow" through an algorithm or program in such a way that it can influence the program’s output, then this is considered insecure information flow, since attackers could potentially observe (parts of) the secret. There is a strong correspondence between secure information flow and algorithmic fairness: if protected attributes such as race, gender, or age are treated as secret program inputs, then secure information flow means that these "secret" attributes cannot influence the result of a program. While most research on algorithmic fairness evaluation concentrates on studying the impact of algorithms (often treating the algorithm as a black box), the concepts derived from information flow can be used for the analysis of both disparate treatment and disparate impact w.r.t. a structural causal model. In this paper, we examine the relationship between quantitative as well as qualitative information-flow properties and fairness. Moreover, based on this duality, we derive a new quantitative notion of fairness called fairness spread, which can be easily analyzed using quantitative information flow and which strongly relates to counterfactual fairness. We demonstrate that off-the-shelf tools for information-flow properties can be used to formally analyze a program's algorithmic fairness properties, including the new notion of fairness spread as well as established notions such as demographic parity.
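To make the correspondence described in the abstract concrete, the following minimal Python sketch (not taken from the paper and not its analysis tool; the decision rule decide and all other names are hypothetical) treats a protected attribute as a secret input: a qualitative non-interference check rules out any influence of that input on the decision (no disparate treatment), while a demographic-parity gap measures the distributional effect that can persist through correlated public inputs (disparate impact).

    # Illustrative sketch only: a protected attribute as a "secret" input,
    # checked for (i) non-interference and (ii) demographic parity.
    import random

    def decide(income, protected):
        # Hypothetical decision rule; `protected` plays the role of the secret input.
        return income >= 50_000

    def non_interference(f, public_inputs, protected_values):
        # Qualitative check: for every public input, varying only the protected
        # attribute must never change the output.
        return all(
            len({f(x, a) for a in protected_values}) == 1
            for x in public_inputs
        )

    def demographic_parity_gap(f, samples_by_group):
        # Quantitative relaxation: largest difference in positive-outcome rates
        # between any two groups.
        rates = [
            sum(f(x, g) for x in xs) / len(xs)
            for g, xs in samples_by_group.items()
        ]
        return max(rates) - min(rates)

    if __name__ == "__main__":
        random.seed(0)
        samples = {
            "A": [random.randint(10_000, 100_000) for _ in range(1_000)],
            "B": [random.randint(10_000, 80_000) for _ in range(1_000)],
        }
        # True: the protected attribute cannot influence the decision at all.
        print(non_interference(decide, samples["A"] + samples["B"], ["A", "B"]))
        # Nonzero: disparate impact can remain even without disparate treatment,
        # because the public input (income) is distributed differently per group.
        print(round(demographic_parity_gap(decide, samples), 3))

The paper itself analyzes such properties with off-the-shelf information-flow tools rather than by testing; the sketch only illustrates the conceptual link between secure information flow and fairness notions.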
Published
2024-03-24
How to Cite
Teuber, S., & Beckert, B. (2024). An Information-Flow Perspective on Algorithmic Fairness. Proceedings of the AAAI Conference on Artificial Intelligence, 38(14), 15337-15345. https://doi.org/10.1609/aaai.v38i14.29458
Issue
Vol. 38 No. 14 (2024)
Section
AAAI Technical Track on Machine Learning V