Accurate Fairness: Improving Individual Fairness without Trading Accuracy

Authors

  • Xuran Li, State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences
  • Peng Wu, State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences
  • Jing Su, State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences

DOI:

https://doi.org/10.1609/aaai.v37i12.26674

Keywords:

General

Abstract

Accuracy and individual fairness are both crucial for trustworthy machine learning, but the two are often at odds: enhancing one may inevitably sacrifice the other, with side effects of true bias or false fairness. In this paper, we propose a new fairness criterion, accurate fairness, to align individual fairness with accuracy. Informally, it requires that the treatments of an individual and of the individual's similar counterparts conform to a uniform target, namely the ground truth of the individual. We prove that accurate fairness also implies typical group fairness criteria over a union of similar sub-populations. We then present a Siamese fairness in-processing approach to minimize the accuracy and fairness losses of a machine learning model under the accurate fairness constraints. To the best of our knowledge, this is the first time a Siamese approach has been adapted for bias mitigation. We also propose fairness confusion matrix-based metrics, fair-precision, fair-recall, and fair-F1 score, to quantify the trade-off between accuracy and individual fairness. Comparative case studies on popular fairness datasets show that our Siamese fairness approach achieves, on average, 1.02%-8.78% higher individual fairness (in terms of fairness through awareness), 8.38%-13.69% higher accuracy, 10.09%-20.57% higher true fair rate, and 5.43%-10.01% higher fair-F1 score than state-of-the-art bias mitigation techniques. This demonstrates that our Siamese fairness approach can indeed improve individual fairness without trading accuracy. Finally, the accurate fairness criterion and the Siamese fairness approach are applied to mitigate possible service discrimination on a real Ctrip dataset, fairly serving on average 112.33% more customers (specifically, 81.29% more customers in an accurately fair way) than baseline models.
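To illustrate how fairness confusion matrix-based metrics of this kind might be computed, the following is a minimal Python sketch. It assumes the fairness confusion matrix cross-tabulates prediction correctness with whether an individual and its similar counterparts are treated consistently, and defines fair-precision, fair-recall, and fair-F1 by analogy with the standard metrics, with the "true fair" quadrant playing the role of true positives. The function names, quadrant labels, and exact formulas here are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def fairness_confusion_matrix(correct, fair):
    """Cross-tabulate prediction correctness with individual-fairness of treatment.

    correct: boolean array, True where the prediction matches the ground truth.
    fair:    boolean array, True where the individual and its similar counterparts
             receive consistent treatment (e.g., under fairness through awareness).
    Returns counts (true_fair, false_fair, false_biased, true_biased); the naming
    is an assumption made for this sketch.
    """
    correct = np.asarray(correct, dtype=bool)
    fair = np.asarray(fair, dtype=bool)
    true_fair = np.sum(correct & fair)       # accurate and consistently treated
    false_fair = np.sum(~correct & fair)     # consistently treated but inaccurate
    false_biased = np.sum(~correct & ~fair)  # inaccurate and inconsistently treated
    true_biased = np.sum(correct & ~fair)    # accurate but inconsistently treated
    return true_fair, false_fair, false_biased, true_biased

def fair_precision_recall_f1(correct, fair):
    """Fair-precision, fair-recall, and fair-F1 analogues of the standard metrics."""
    tf, ff, fb, tb = fairness_confusion_matrix(correct, fair)
    fair_precision = tf / (tf + ff) if (tf + ff) else 0.0
    fair_recall = tf / (tf + tb) if (tf + tb) else 0.0
    denom = fair_precision + fair_recall
    fair_f1 = 2 * fair_precision * fair_recall / denom if denom else 0.0
    return fair_precision, fair_recall, fair_f1
```

Under these assumptions, the "true fair rate" reported in the case studies would correspond to the fraction of samples falling in the true-fair quadrant.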

Published

2023-06-26

How to Cite

Li, X., Wu, P., & Su, J. (2023). Accurate Fairness: Improving Individual Fairness without Trading Accuracy. Proceedings of the AAAI Conference on Artificial Intelligence, 37(12), 14312-14320. https://doi.org/10.1609/aaai.v37i12.26674

Issue

Vol. 37 No. 12 (2023)

Section

AAAI Special Track on AI for Social Impact