Adversarial Fairness Network

Authors

  • Taeuk Jang, Purdue University
  • Xiaoqian Wang, Purdue University
  • Heng Huang, University of Maryland at College Park

DOI

https://doi.org/10.1609/aaai.v38i20.30220

Keywords

General

Abstract

Fairness is a rising concern in machine learning. Recent research has found that state-of-the-art models can amplify social bias by making biased predictions against certain population groups (characterized by sensitive features such as race or gender). Such unfair predictions across groups raise trust issues and ethical concerns about machine learning, especially in sensitive fields such as employment, criminal justice, and trust score assessment. In this paper, we introduce a new framework to improve machine learning fairness. The goal of our model is to minimize the influence of the sensitive feature from the perspectives of both the data input and the predictive model. To achieve this goal, we reformulate the data input to eliminate the sensitive information and strengthen model fairness by minimizing the marginal contribution of the sensitive feature. We propose to learn the sensitive-irrelevant input via sampling among features and design an adversarial network to minimize the dependence between the reformulated input and the sensitive information. Empirical results validate that our model achieves comparable or better results than related state-of-the-art methods with respect to both fairness metrics and prediction performance.
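
The adversarial training described above can be illustrated with a minimal PyTorch sketch: an encoder produces the reformulated representation, a predictor handles the downstream task, and an adversary tries to recover the sensitive feature from that representation; the encoder is trained both to support the prediction and to fool the adversary, minimizing the dependence between the reformulated input and the sensitive information. The network sizes, trade-off weight, synthetic data, and alternating update scheme below are assumptions for illustration, not the paper's actual architecture or its feature-sampling procedure.

    import torch
    import torch.nn as nn

    d_in, d_rep = 16, 8
    encoder = nn.Sequential(nn.Linear(d_in, d_rep), nn.ReLU())
    predictor = nn.Linear(d_rep, 1)   # downstream task head (logits)
    adversary = nn.Linear(d_rep, 1)   # tries to recover the sensitive feature

    opt_main = torch.optim.Adam(
        [*encoder.parameters(), *predictor.parameters()], lr=1e-3)
    opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()
    lam = 1.0  # assumed accuracy-fairness trade-off weight

    # Synthetic batch: features x, binary task label y, binary sensitive feature s.
    x = torch.randn(128, d_in)
    y = torch.randint(0, 2, (128, 1)).float()
    s = torch.randint(0, 2, (128, 1)).float()

    for step in range(200):
        # 1) Adversary step: learn to predict s from the (frozen) representation.
        z = encoder(x).detach()
        loss_adv = bce(adversary(z), s)
        opt_adv.zero_grad(); loss_adv.backward(); opt_adv.step()

        # 2) Main step: predict y well while making s unpredictable from z,
        #    pushing the representation toward independence from s.
        z = encoder(x)
        loss_main = bce(predictor(z), y) - lam * bce(adversary(z), s)
        opt_main.zero_grad(); loss_main.backward(); opt_main.step()

Subtracting the adversary's loss in the main objective rewards representations from which the sensitive feature cannot be recovered, which is the dependence-minimization idea the abstract describes.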

Published

2024-03-24

How to Cite

Jang, T., Wang, X., & Huang, H. (2024). Adversarial Fairness Network. Proceedings of the AAAI Conference on Artificial Intelligence, 38(20), 22159-22166. https://doi.org/10.1609/aaai.v38i20.30220