Quality-Diversity Generative Sampling for Learning with Synthetic Data
DOI:
https://doi.org/10.1609/aaai.v38i18.29955
Keywords:
PEAI: Bias, Fairness & Equity, CV: Bias, Fairness & Privacy
Abstract
Generative models can serve as surrogates for some real data sources by creating synthetic training datasets, but in doing so they may transfer biases to downstream tasks. We focus on protecting quality and diversity when generating synthetic training datasets. We propose quality-diversity generative sampling (QDGS), a framework for sampling data uniformly across a user-defined measure space, despite the data coming from a biased generator. QDGS is a model-agnostic framework that uses prompt guidance to optimize a quality objective across measures of diversity for synthetically generated data, without fine-tuning the generative model. Using balanced synthetic datasets generated by QDGS, we first debias classifiers trained on color-biased shape datasets as a proof-of-concept. By applying QDGS to facial data synthesis, we prompt for desired semantic concepts, such as skin tone and age, to create an intersectional dataset with a combined blend of visual features. Leveraging this balanced data for training classifiers improves fairness while maintaining accuracy on facial recognition benchmarks. Code available at: https://github.com/Cylumn/qd-generative-sampling
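To make the sampling idea concrete, below is a minimal, self-contained sketch of a MAP-Elites-style quality-diversity loop over a frozen generator's latent space. Everything in it (the quality and measures functions, LATENT_DIM, GRID) is an illustrative stand-in rather than the paper's implementation: in QDGS the quality objective and the diversity measures would be derived from language prompts (e.g., similarity to text descriptions of realism, skin tone, or age), and the archived latent codes would be decoded by the generator into the balanced synthetic dataset described in the abstract.

import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 8     # dimensionality of the (toy) latent space
GRID = 10          # archive cells per measure axis
archive = {}       # (i, j) cell -> (quality score, latent code)

def quality(z):
    # Stand-in for a prompt-based quality objective (e.g., similarity to
    # "a realistic photo of a face"); here, a simple preference for small norms.
    return -float(np.linalg.norm(z))

def measures(z):
    # Stand-ins for prompt-based diversity measures (e.g., skin tone, age),
    # squashed into [0, 1) so they can index archive cells.
    return 1.0 / (1.0 + np.exp(-z[:2]))

def to_cell(m):
    return tuple(np.minimum((m * GRID).astype(int), GRID - 1))

for _ in range(5000):
    if archive and rng.random() < 0.5:
        # Mutate the latent code of an existing elite.
        keys = list(archive)
        _, parent = archive[keys[rng.integers(len(keys))]]
        z = parent + 0.1 * rng.standard_normal(LATENT_DIM)
    else:
        # Sample a fresh latent code from the generator's prior.
        z = rng.standard_normal(LATENT_DIM)
    cell = to_cell(measures(z))
    q = quality(z)
    # Keep the highest-quality latent code found for each measure cell.
    if cell not in archive or q > archive[cell][0]:
        archive[cell] = (q, z)

print(f"filled {len(archive)} of {GRID * GRID} measure cells")

Decoding the archived latent codes with the frozen generator would produce samples spread roughly uniformly over the measure space, which is the sense in which QDGS yields a balanced synthetic training set without fine-tuning the generative model.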
Published
2024-03-24
How to Cite
Chang, A., Fontaine, M. C., Booth, S., Matarić, M. J., & Nikolaidis, S. (2024). Quality-Diversity Generative Sampling for Learning with Synthetic Data. Proceedings of the AAAI Conference on Artificial Intelligence, 38(18), 19805-19812. https://doi.org/10.1609/aaai.v38i18.29955
Issue
Vol. 38 No. 18 (2024)
Section
AAAI Technical Track on Philosophy and Ethics of AI