Group-Aware Threshold Adaptation for Fair Classification

Authors

  • Taeuk Jang, Purdue University
  • Pengyi Shi, Purdue University
  • Xiaoqian Wang, Purdue University

DOI:

https://doi.org/10.1609/aaai.v36i6.20657

Keywords:

Machine Learning (ML)

Abstract

Fairness in machine learning is receiving increasing attention as its applications in different fields continue to expand and diversify. To mitigate discriminatory model behavior across demographic groups, we introduce a novel post-processing method that optimizes over multiple fairness constraints through group-aware threshold adaptation. We propose to learn an adaptive classification threshold for each demographic group by optimizing the confusion matrix estimated from the probability distribution of the classification model's output. Because we need only an estimate of this output distribution, rather than the classification model itself, our post-processing method can be applied to a wide range of classification models, improves fairness in a model-agnostic manner, and preserves privacy. This even allows us to post-process existing fairness methods to further improve the trade-off between accuracy and fairness. Moreover, our method has low computational cost. We provide rigorous theoretical analysis of the convergence of our optimization algorithm and of the trade-off between accuracy and fairness. Under the same conditions, our method theoretically achieves a tighter upper bound on near-optimality than the previous method. Experimental results demonstrate that our method outperforms state-of-the-art methods and obtains results closest to the theoretical accuracy-fairness trade-off boundary.
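To make the post-processing idea concrete, below is a minimal sketch of group-aware threshold selection: given held-out model scores, group memberships, and labels, it grid-searches one decision threshold per group that trades accuracy against a demographic-parity-style gap. This is an illustrative simplification, not the paper's algorithm (which optimizes a confusion matrix estimated from the model's output distribution); the function names fit_group_thresholds and predict_with_thresholds and the penalty weight lam are hypothetical.

    import numpy as np

    def fit_group_thresholds(scores, groups, labels, grid=None, lam=1.0):
        """Pick one decision threshold per demographic group.

        For each group, choose the threshold maximizing group accuracy
        minus a penalty (weight `lam`) on the gap between that group's
        positive-prediction rate and the overall base rate. Needs only
        scores on a held-out set, never the underlying model.
        """
        if grid is None:
            grid = np.linspace(0.05, 0.95, 19)
        overall_rate = labels.mean()
        thresholds = {}
        for g in np.unique(groups):
            mask = groups == g
            best_obj, best_t = -np.inf, 0.5
            for t in grid:
                preds = (scores[mask] >= t).astype(int)
                acc = (preds == labels[mask]).mean()
                gap = abs(preds.mean() - overall_rate)
                obj = acc - lam * gap
                if obj > best_obj:
                    best_obj, best_t = obj, t
            thresholds[g] = best_t
        return thresholds

    def predict_with_thresholds(scores, groups, thresholds):
        """Apply the learned per-group thresholds to new scores."""
        return np.array([int(s >= thresholds[g])
                         for s, g in zip(scores, groups)])

Because the procedure touches only the score distribution, the same wrapper can sit on top of any probabilistic classifier, including one already trained with an in-processing fairness method.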

Published

2022-06-28

How to Cite

Jang, T., Shi, P., & Wang, X. (2022). Group-Aware Threshold Adaptation for Fair Classification. Proceedings of the AAAI Conference on Artificial Intelligence, 36(6), 6988-6995. https://doi.org/10.1609/aaai.v36i6.20657

Section

AAAI Technical Track on Machine Learning I