Censored Fairness through Awareness


  • Wenbin Zhang Michigan Technological University
  • Tina Hernandez-Boussard Stanford University
  • Jeremy Weiss National Institutes of Health

There is growing concern within the machine learning community and beyond that Artificial Intelligence (AI) faces a bias and discrimination crisis, making AI fairness an urgent need. While many have begun to work on this problem, most existing work assumes that class labels are available for the given fairness definition and algorithm, which may not align with real-world usage. In this work, we study an AI fairness problem that stems from the gap between the design of a "fair" model in the lab and its deployment in the real world. Specifically, we consider defining and mitigating individual unfairness amidst censorship, where the availability of class labels is not guaranteed due to censoring, a setting that arises in a variety of real-world, socially sensitive applications. We show that our method is able to quantify and mitigate individual unfairness in the presence of censorship across three benchmark tasks, providing the first known results on individual fairness guarantees in the analysis of censored data.
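To make the "fairness through awareness" notion behind the abstract concrete: individual fairness asks that similar individuals receive similar outcomes, commonly formalized as a Lipschitz condition |f(x) - f(y)| ≤ L · d(x, y) for a task-specific similarity metric d. Under censorship, the model outputs a risk score rather than an observed class label, so the same check can be applied to risk scores. The sketch below is purely illustrative and is not the paper's algorithm; the Euclidean metric and the function names are assumptions for the example.

```python
# Illustrative sketch only (not the paper's method): auditing
# individual fairness as a Lipschitz condition on risk scores.
import math

def euclidean(x, y):
    """Stand-in for the task-specific similarity metric d(x, y)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def max_lipschitz_ratio(features, risks, d=euclidean):
    """Largest |f(x) - f(y)| / d(x, y) over all pairs of distinct
    individuals; values <= L certify L-Lipschitz individual
    fairness on this sample."""
    worst = 0.0
    n = len(features)
    for i in range(n):
        for j in range(i + 1, n):
            dist = d(features[i], features[j])
            if dist > 0:
                worst = max(worst, abs(risks[i] - risks[j]) / dist)
    return worst

# Two near-identical individuals with very different risk scores
# signal an individual-fairness violation.
features = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)]
risks = [0.20, 0.90, 0.95]
print(round(max_lipschitz_ratio(features, risks), 6))  # large ratio => unfair
```

A real audit would replace the Euclidean distance with a learned or domain-specified similarity metric, and a censoring-aware method would additionally restrict attention to comparable (non-censored or ordered) pairs when evaluating outcomes.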




How to Cite

Zhang, W., Hernandez-Boussard, T., & Weiss, J. (2023). Censored Fairness through Awareness. Proceedings of the AAAI Conference on Artificial Intelligence, 37(12), 14611-14619. https://doi.org/10.1609/aaai.v37i12.26708



AAAI Special Track on AI for Social Impact