Sustaining Fairness via Incremental Learning

Authors

  • Somnath Basu Roy Chowdhury, University of North Carolina at Chapel Hill
  • Snigdha Chaturvedi, University of North Carolina at Chapel Hill

DOI:

https://doi.org/10.1609/aaai.v37i6.25833

Keywords:

ML: Bias and Fairness, SNLP: Bias, Fairness, Transparency & Privacy

Abstract

Machine learning systems are often deployed to make critical decisions such as credit lending and hiring. While making decisions, such systems often encode the user's demographic information (e.g., gender or age) in their intermediate representations. This can lead to decisions that are biased toward specific demographic groups. Prior work has focused on debiasing intermediate representations to ensure fair decisions. However, these approaches fail to remain fair when the task or the demographic distribution changes. To ensure fairness in the wild, it is important for a system to adapt to such changes as it accesses new data incrementally. In this work, we address this issue by introducing the problem of learning fair representations in an incremental learning setting. To this end, we present Fairness-aware Incremental Representation Learning (FaIRL), a representation learning system that can sustain fairness while incrementally learning new tasks. FaIRL achieves fairness and learns new tasks by controlling the rate-distortion function of the learned representations. Our empirical evaluations show that FaIRL makes fair decisions while achieving high performance on the target task, outperforming several baselines.
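The abstract states that FaIRL controls the rate-distortion function of the learned representations. As a rough illustration of this quantity (not the paper's actual implementation), the sketch below computes a standard rate-distortion estimate for a batch of representations, R(Z, ε) = ½ log det(I + d/(nε²) ZZᵀ), which measures the number of bits needed to encode the representations up to distortion ε; the function name, the default ε, and the use of NumPy are all illustrative assumptions.

```python
import numpy as np

def coding_rate(Z: np.ndarray, eps: float = 0.5) -> float:
    """Rate-distortion estimate of representations (illustrative sketch).

    Z   : d x n matrix, one representation per column.
    eps : allowed distortion (hypothetical default, not from the paper).
    Returns R(Z, eps) = 1/2 * logdet(I + d / (n * eps^2) * Z @ Z.T).
    """
    d, n = Z.shape
    scaled_cov = (d / (n * eps ** 2)) * (Z @ Z.T)
    # slogdet is numerically stabler than det for log-determinants.
    sign, logdet = np.linalg.slogdet(np.eye(d) + scaled_cov)
    return 0.5 * logdet
```

Intuitively, driving this rate up spreads representations out (preserving task information), while driving it down for representations grouped by a protected attribute compresses away demographic information; a fairness-aware objective can trade these two terms off.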

Published

2023-06-26

How to Cite

Basu Roy Chowdhury, S., & Chaturvedi, S. (2023). Sustaining Fairness via Incremental Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(6), 6797-6805. https://doi.org/10.1609/aaai.v37i6.25833

Section

AAAI Technical Track on Machine Learning I