Censored Fairness through Awareness

Authors

  • Wenbin Zhang, Michigan Technological University
  • Tina Hernandez-Boussard, Stanford University
  • Jeremy Weiss, National Institutes of Health

DOI:

https://doi.org/10.1609/aaai.v37i12.26708

Keywords:

General

Abstract

There has been increasing concern within the machine learning community and beyond that Artificial Intelligence (AI) faces a bias and discrimination crisis that urgently demands AI fairness. As many have begun to work on this problem, most existing work depends on the availability of class labels for the given fairness definition and algorithm, which may not align with real-world usage. In this work, we study an AI fairness problem that stems from the gap between the design of a "fair" model in the lab and its deployment in the real world. Specifically, we consider defining and mitigating individual unfairness amidst censorship, where class labels are not always available, a setting that arises broadly in real-world socially sensitive applications. We show that our method is able to quantify and mitigate individual unfairness in the presence of censorship across three benchmark tasks, providing the first known results on individual fairness guarantees in the analysis of censored data.
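For context on the title's reference: fairness through awareness (Dwork et al., 2012), which this paper extends to censored data, formalizes individual fairness as a Lipschitz condition on the model. A minimal sketch in standard notation (the symbols below are illustrative, not taken from this paper): a randomized classifier $M$ mapping individuals to output distributions satisfies individual fairness with respect to an output distance $D$ and a task-specific similarity metric $d$ on individuals if

$$ D\big(M(x), M(y)\big) \le d(x, y) \quad \text{for all individuals } x, y, $$

so that individuals who are similar under $d$ receive similarly distributed outputs. The censored setting studied here is the case where the class label needed to train and audit such a model is not observed for every individual.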

Published

2023-06-26

How to Cite

Zhang, W., Hernandez-Boussard, T., & Weiss, J. (2023). Censored Fairness through Awareness. Proceedings of the AAAI Conference on Artificial Intelligence, 37(12), 14611-14619. https://doi.org/10.1609/aaai.v37i12.26708

Section

AAAI Special Track on AI for Social Impact