Constructing a Fair Classifier with Generated Fair Data


  • Taeuk Jang Purdue University
  • Feng Zheng Southern University of Science and Technology
  • Xiaoqian Wang Purdue University


Ethics -- Bias, Fairness, Transparency & Privacy


Fairness in machine learning is attracting growing attention, as it directly affects real-world applications and social outcomes. Recent methods attempt to alleviate discrimination between demographic groups characterized by sensitive attributes (such as race, age, or gender). Several studies have found that the data itself is biased, so models trained directly on raw data can replicate or even exacerbate this bias, leading to vastly different prediction performance across demographic groups. To address this issue, we propose a new approach that improves machine learning fairness by generating fair data. We introduce a generative model that produces cross-domain samples w.r.t. multiple sensitive attributes. This allows us to generate an unlimited number of samples balanced w.r.t. both the target label and the sensitive attributes to enhance fair prediction. By training the classifier solely on the synthetic data and then transferring the model to real data, we overcome the under-representation problem, which is non-trivial since collecting real data is extremely time- and resource-consuming. We provide empirical evidence demonstrating the benefit of our model with respect to both fairness and accuracy.
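The core idea of the abstract can be illustrated with a minimal sketch: draw equal numbers of synthetic samples for every (target label, sensitive attribute) combination, then train a classifier only on that balanced set. The `toy_generator` below is a hypothetical stand-in for the paper's trained generative model, and the simple logistic regression is ours for illustration; neither is the authors' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_generator(label, attr, n):
    # Hypothetical stand-in for a trained conditional generative model:
    # returns n two-dimensional feature vectors conditioned on the
    # target label and the sensitive attribute.
    mean = np.array([2.0 * label, 0.5 * attr])
    return rng.normal(loc=mean, scale=1.0, size=(n, 2))

def balanced_synthetic_dataset(n_per_cell=250):
    # Sample the same number of points for every (label, attribute)
    # cell, so the training set is balanced w.r.t. both the target
    # label and the sensitive attribute.
    X, y, s = [], [], []
    for label in (0, 1):
        for attr in (0, 1):
            X.append(toy_generator(label, attr, n_per_cell))
            y += [label] * n_per_cell
            s += [attr] * n_per_cell
    return np.vstack(X), np.array(y), np.array(s)

def fit_logistic(X, y, lr=0.1, epochs=200):
    # Plain logistic regression by gradient descent, trained solely
    # on the synthetic data.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        grad = p - y
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

X, y, s = balanced_synthetic_dataset()
w, b = fit_logistic(X, y)
acc = float((((X @ w + b) > 0).astype(int) == y).mean())
```

In practice the classifier trained this way would then be transferred to real data; the balance over (label, attribute) cells is what prevents the under-represented groups from being drowned out during training.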




How to Cite

Jang, T., Zheng, F., & Wang, X. (2021). Constructing a Fair Classifier with Generated Fair Data. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 7908-7916.



AAAI Technical Track on Machine Learning II