A Fair Generative Model Using LeCam Divergence

Authors

  • Soobin Um KAIST
  • Changho Suh KAIST

DOI:

https://doi.org/10.1609/aaai.v37i8.26196

Keywords:

ML: Bias and Fairness, CV: Bias, Fairness & Privacy, ML: Deep Generative Models & Autoencoders, ML: Deep Neural Network Algorithms

Abstract

We explore a fairness-related challenge that arises in generative models: biased training data with imbalanced demographics may yield a highly asymmetric number of generated samples across distinct groups. We focus on practically relevant scenarios in which demographic labels are unavailable, making the design of a fair generative model non-trivial. In this paper, we propose an optimization framework that regulates unfairness in such settings via a single statistical measure, the LeCam (LC) divergence. Specifically, to quantify the degree of unfairness, we employ a balanced yet small reference dataset and measure its distance from the generated samples using the LC divergence, which proves particularly instrumental when the reference dataset is small. We take a variational optimization approach to implement the LC-based measure. Experiments on benchmark real-world datasets demonstrate that the proposed framework significantly improves fairness while maintaining realistic sample quality over a wide range of reference-set sizes, down to 1% of the training set.
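To make the fairness measure concrete, the following is a minimal sketch of the closed-form LeCam (triangular) divergence between two discrete distributions, LC(P, Q) = (1/2) Σᵢ (pᵢ − qᵢ)² / (pᵢ + qᵢ). Note that the paper implements this measure variationally within generator training on continuous sample distributions; this sketch only illustrates the definition on discrete group frequencies, with the function name and the two-group example chosen here for illustration.

```python
import numpy as np

def lecam_divergence(p, q, eps=1e-12):
    """LeCam (triangular) divergence between two discrete distributions.

    LC(P, Q) = 1/2 * sum_i (p_i - q_i)^2 / (p_i + q_i).
    It is symmetric in P and Q, zero iff P = Q, and bounded above by 1,
    which keeps the penalty well-behaved even for small reference sets.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    # eps guards against division by zero when p_i = q_i = 0.
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))

# A generator matching the balanced reference incurs zero penalty.
balanced = [0.5, 0.5]
print(lecam_divergence(balanced, balanced))  # ~0.0
# A biased generator (90/10 over two demographic groups) is penalized.
print(lecam_divergence([0.9, 0.1], balanced))
```

In the paper's setting, P would be the distribution of generated samples and Q the small balanced reference set, with the divergence estimated variationally rather than from explicit group frequencies.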

Published

2023-06-26

How to Cite

Um, S., & Suh, C. (2023). A Fair Generative Model Using LeCam Divergence. Proceedings of the AAAI Conference on Artificial Intelligence, 37(8), 10034-10042. https://doi.org/10.1609/aaai.v37i8.26196

Section

AAAI Technical Track on Machine Learning III