Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach

Authors

  • Cuong Tran, Syracuse University
  • Ferdinando Fioretto, Syracuse University
  • Pascal Van Hentenryck, Georgia Institute of Technology

DOI

https://doi.org/10.1609/aaai.v35i11.17193

Keywords

Ethics -- Bias, Fairness, Transparency & Privacy, (Deep) Neural Network Algorithms, Constraint Optimization

Abstract

A critical concern in data-driven decision making is to build models whose outcomes do not discriminate against some demographic groups, including gender, ethnicity, or age. To ensure non-discrimination in learning tasks, knowledge of the sensitive attributes is essential, while, in practice, these attributes may not be available due to legal and ethical requirements. To address this challenge, this paper studies a model that protects the privacy of the individuals’ sensitive information while still allowing the training of non-discriminatory predictors. The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints while guaranteeing the privacy of sensitive attributes. The paper analyzes the tension between accuracy, privacy, and fairness, and the experimental evaluation illustrates the benefits of the proposed model on several prediction tasks.
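
To make the Lagrangian dual idea from the abstract concrete, below is a minimal sketch of fairness-constrained training with a single dual variable, alternating a primal descent step on the network weights with a projected dual ascent step on the multiplier. This is an illustration under assumed choices (a demographic-parity surrogate, toy data, arbitrary hyperparameters), not the authors' implementation, and it omits the paper's differential-privacy treatment of the sensitive attribute.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data (assumed for illustration): features x, binary labels y,
# and a binary sensitive attribute a (e.g., a protected-group indicator).
n, d = 512, 10
x = torch.randn(n, d)
y = torch.randint(0, 2, (n,)).float()
a = torch.randint(0, 2, (n,))

model = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, 1))
bce = nn.BCEWithLogitsLoss()
primal_opt = torch.optim.Adam(model.parameters(), lr=1e-3)

lam = torch.tensor(0.0)   # Lagrange multiplier (dual variable), kept >= 0
dual_lr = 0.05            # dual step size (assumed)
alpha = 0.05              # tolerated demographic-parity gap (assumed)

def demographic_parity_gap(logits, a):
    """Absolute gap in mean predicted positive rate between the two groups."""
    p = torch.sigmoid(logits).squeeze(-1)
    return (p[a == 0].mean() - p[a == 1].mean()).abs()

for step in range(200):
    # Primal step: descend on the Lagrangian
    #   L(theta, lam) = loss(theta) + lam * (fairness_violation(theta) - alpha).
    logits = model(x)
    loss = bce(logits.squeeze(-1), y)
    violation = demographic_parity_gap(logits, a) - alpha
    lagrangian = loss + lam * violation
    primal_opt.zero_grad()
    lagrangian.backward()
    primal_opt.step()

    # Dual step: projected (sub)gradient ascent on the multiplier.
    with torch.no_grad():
        violation = demographic_parity_gap(model(x), a) - alpha
        lam = torch.clamp(lam + dual_lr * violation, min=0.0)
```

In the paper, the terms that depend on the sensitive attribute are additionally computed under differential privacy; the sketch above shows only the primal-dual structure.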

Published

2021-05-18

How to Cite

Tran, C., Fioretto, F., & Van Hentenryck, P. (2021). Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach. Proceedings of the AAAI Conference on Artificial Intelligence, 35(11), 9932-9939. https://doi.org/10.1609/aaai.v35i11.17193

Issue

Vol. 35 No. 11 (2021)

Section

AAAI Technical Track on Machine Learning IV