Explanation Difference: Bridging Procedural and Distributional Fairness
DOI:
https://doi.org/10.1609/aies.v8i2.36612
Abstract
Fairness in Machine Learning (Fair ML) is often presented as a trade-off between predictive performance and equality of predicted values. This view of fairness, commonly referred to as distributional fairness, fails to consider how a model arrives at its predictions. As a result, Fair ML models may evaluate protected groups on differing criteria, creating incentive structures that further perpetuate societal biases. Alternatively, procedural fairness attempts to ensure a fair decision-making process, but often does so at the expense of distributional fairness. In this paper, we propose a new procedural fairness measure, Explanation Difference (EDiff), and further illustrate the importance of treating fairness as a multi-objective optimization problem spanning distributional fairness, procedural fairness, and predictive performance. We conduct an extensive experimental evaluation showing 1) the shortcomings of optimizing solely for distributional or procedural fairness, and 2) that our multi-objective approach utilizing EDiff can build ML models that are fair in both the distributional and the procedural sense while retaining strong predictive performance.
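To make the distinction concrete, the sketch below contrasts a distributional fairness metric (demographic parity difference in positive prediction rates) with a simple procedural proxy: the distance between per-group mean feature attributions of a linear scorer. Note this proxy is an illustrative assumption, not the paper's EDiff definition; the data, the group variable, and the fixed weight vector are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two protected groups with slightly shifted feature distributions
# (hypothetical setup for illustration only).
n = 1000
group = rng.integers(0, 2, size=n)  # 0 = group A, 1 = group B
X = rng.normal(size=(n, 3)) + group[:, None] * np.array([0.5, 0.0, -0.5])

# A fixed linear scorer standing in for a trained model (assumption).
w = np.array([1.0, -0.5, 0.8])
pred = (X @ w > 0).astype(int)

# Distributional fairness: demographic parity difference in positive rates.
dp_diff = abs(pred[group == 0].mean() - pred[group == 1].mean())

# Procedural proxy: per-sample linear attributions (w * x), compared across
# groups via the L1 distance between group-mean attribution vectors.
attr = X * w
explanation_diff = np.abs(
    attr[group == 0].mean(axis=0) - attr[group == 1].mean(axis=0)
).sum()

print(f"demographic parity diff: {dp_diff:.3f}")
print(f"explanation difference (proxy): {explanation_diff:.3f}")
```

A model can score well on one quantity while scoring poorly on the other, which is why the abstract argues for optimizing both jointly alongside predictive performance.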
Published
2025-10-15
How to Cite
Germino, J., Zhao, Y., Derr, T., Moniz, N., & Chawla, N. V. (2025). Explanation Difference: Bridging Procedural and Distributional Fairness. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 8(2), 1078-1090. https://doi.org/10.1609/aies.v8i2.36612