Fair Generative Models via Transfer Learning

Authors

  • Christopher T.H. Teo, Singapore University of Technology and Design
  • Milad Abdollahzadeh, Singapore University of Technology and Design
  • Ngai-Man Cheung, Singapore University of Technology and Design

DOI:

https://doi.org/10.1609/aaai.v37i2.25339

Keywords:

CV: Bias, Fairness & Privacy, ML: Bias and Fairness

Abstract

This work addresses fair generative models. Dataset biases have been a major cause of unfairness in deep generative models. Previous work proposed augmenting large, biased datasets with small, unbiased reference datasets. Under this setup, a weakly-supervised approach was proposed that achieves state-of-the-art quality and fairness in generated samples. In this work, building on the same setup, we propose a simple yet effective approach. First, we propose fairTL, a transfer learning approach to learn fair generative models. Under fairTL, we pre-train the generative model on the available large, biased dataset and subsequently adapt the model using the small, unbiased reference dataset. We find that fairTL learns expressive sample generation during pre-training, thanks to the large (biased) dataset. This knowledge is then transferred to the target model during adaptation, which also learns to capture the underlying fair distribution of the small reference dataset. Second, we propose fairTL++, which introduces two additional innovations to improve upon fairTL: (i) multiple feedback and (ii) Linear-Probing followed by Fine-Tuning (LP-FT). Going one step further, we consider an alternative, challenging setup where only a pre-trained (potentially biased) model is available and the dataset used to pre-train it is inaccessible. We demonstrate that our proposed fairTL and fairTL++ remain very effective under this setup. We note that previous work requires access to the large, biased datasets and cannot handle this more challenging setup. Extensive experiments show that fairTL and fairTL++ achieve state-of-the-art results in both quality and fairness of generated samples. The code and additional resources can be found at bearwithchris.github.io/fairTL/.
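The two-stage idea described in the abstract can be summarised with a minimal sketch: pre-train a generator on the large, biased dataset, then adapt it on the small, unbiased reference set. This is an illustrative outline only, not the authors' implementation: the generator/discriminator classes, the data loaders (biased_loader, fair_loader), the standard non-saturating GAN loss, and all hyperparameters are assumptions, and the fairTL++ components (multiple feedback, LP-FT) are omitted.

import torch
import torch.nn as nn

def gan_step(G, D, real, opt_G, opt_D, z_dim=128):
    """One standard non-saturating GAN update on a batch of real images.
    Assumes D(x) returns one logit per sample, shape (batch, 1)."""
    bce = nn.BCEWithLogitsLoss()
    ones = torch.ones(real.size(0), 1)
    zeros = torch.zeros(real.size(0), 1)

    # Discriminator update: real vs. generated samples.
    fake = G(torch.randn(real.size(0), z_dim)).detach()
    d_loss = bce(D(real), ones) + bce(D(fake), zeros)
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Generator update.
    fake = G(torch.randn(real.size(0), z_dim))
    g_loss = bce(D(fake), ones)
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()

def fair_transfer_learning(G, D, biased_loader, fair_loader,
                           pretrain_epochs=100, adapt_epochs=10):
    """Two-stage training in the spirit of fairTL (hypothetical hyperparameters)."""
    opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)

    # Stage 1: pre-train on the large, biased dataset to learn expressive generation.
    for _ in range(pretrain_epochs):
        for real, _ in biased_loader:
            gan_step(G, D, real, opt_G, opt_D)

    # Stage 2: adapt on the small, unbiased reference dataset so the generator
    # shifts toward the fair distribution while retaining generation quality.
    # (fairTL++ additionally applies multiple discriminator feedback and
    # linear probing before full fine-tuning; not shown here.)
    for _ in range(adapt_epochs):
        for real, _ in fair_loader:
            gan_step(G, D, real, opt_G, opt_D)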

Published

2023-06-26

How to Cite

Teo, C. T., Abdollahzadeh, M., & Cheung, N.-M. (2023). Fair Generative Models via Transfer Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(2), 2429-2437. https://doi.org/10.1609/aaai.v37i2.25339

Issue

Vol. 37 No. 2 (2023)

Section

AAAI Technical Track on Computer Vision II