Improving Fairness and Privacy in Selection Problems
DOI:
https://doi.org/10.1609/aaai.v35i9.16986
Keywords:
Ethics -- Bias, Fairness, Transparency & Privacy
Abstract
Supervised learning models are increasingly used to make decisions about individuals in applications such as hiring, lending, and college admission. These models may inherit pre-existing biases from their training datasets and discriminate on the basis of protected attributes (e.g., race or gender). In addition to unfairness, privacy concerns arise when the use of a model reveals sensitive personal information. Among various privacy notions, differential privacy has become popular in recent years. In this work, we study the possibility of using a differentially private exponential mechanism as a post-processing step to improve both the fairness and the privacy of supervised learning models. Unlike many existing works, we consider a scenario in which a supervised model is used to select a limited number of applicants, since the number of available positions is limited. This assumption fits various settings, such as job applications and college admissions. We use "equal opportunity" as the fairness notion and show that the exponential mechanism can make the decision-making process perfectly fair. Moreover, experiments on real-world datasets show that the exponential mechanism can improve both privacy and fairness, with only a slight decrease in accuracy compared to the model without post-processing.
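To make the core tool concrete: a minimal, illustrative sketch of the exponential mechanism applied to a selection problem, using only the Python standard library. This is not the paper's fairness-adjusted construction; the scoring function, the per-draw budget split, and the function name are assumptions for illustration. Each draw picks a remaining candidate with probability proportional to exp(eps' · score / (2 · sensitivity)), and dividing the privacy budget across the k draws keeps the overall selection differentially private by basic composition.

```python
import math
import random

def exponential_mechanism_select(scores, k, epsilon, sensitivity=1.0):
    """Privately select k candidates from `scores` (hypothetical sketch).

    Draws k indices without replacement; each draw chooses index i with
    probability proportional to exp(eps' * scores[i] / (2 * sensitivity)),
    where eps' = epsilon / k, so the full selection satisfies
    epsilon-differential privacy by basic composition.
    """
    eps_per_draw = epsilon / k
    remaining = list(range(len(scores)))
    chosen = []
    for _ in range(k):
        # Unnormalized exponential-mechanism weights over remaining candidates.
        weights = [
            math.exp(eps_per_draw * scores[i] / (2.0 * sensitivity))
            for i in remaining
        ]
        pick = random.choices(remaining, weights=weights, k=1)[0]
        remaining.remove(pick)
        chosen.append(pick)
    return chosen
```

Higher epsilon concentrates probability on the top-scoring applicants (approaching a deterministic top-k rule), while lower epsilon flattens the distribution, trading selection accuracy for privacy.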
Published
2021-05-18
How to Cite
Khalili, M. M., Zhang, X., Abroshan, M., & Sojoudi, S. (2021). Improving Fairness and Privacy in Selection Problems. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 8092-8100. https://doi.org/10.1609/aaai.v35i9.16986
Section
AAAI Technical Track on Machine Learning II