Faking Fairness via Stealthily Biased Sampling

Authors

  • Kazuto Fukuchi RIKEN Center for Advanced Intelligence Project
  • Satoshi Hara Osaka University
  • Takanori Maehara RIKEN Center for Advanced Intelligence Project

DOI:

https://doi.org/10.1609/aaai.v34i01.5377

Abstract

Auditing the fairness of decision-makers is now in high demand, and several fairness auditing tools have been developed in response. The focus of this study is to raise awareness of the risk posed by malicious decision-makers who fake fairness by abusing these auditing tools and thereby deceive social communities. The question is whether such fraud by a decision-maker is detectable, so that society can avoid the risk of fake fairness. In this study, we answer this question negatively. We specifically focus on a situation in which the decision-maker publishes a benchmark dataset as evidence of his/her fairness and attempts to deceive a person who uses an auditing tool that computes a fairness metric. To assess the (un)detectability of the fraud, we explicitly construct an algorithm, stealthily biased sampling, that can deliberately construct an evil benchmark dataset via subsampling. We show that the fraud made by stealthily biased sampling is indeed difficult to detect, both theoretically and empirically.
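To make the threat model concrete, the following is a minimal toy sketch (not the paper's actual algorithm, which is more sophisticated) of how a malicious decision-maker could subsample a biased dataset so that an audited parity metric looks fair. All function and variable names here are illustrative assumptions; records are simplified to `(group, outcome)` pairs and the audited metric is the demographic parity gap.

```python
import random
from collections import defaultdict

def demographic_parity_gap(records):
    """Auditor's metric: |P(outcome=1 | group A) - P(outcome=1 | group B)|
    for records given as (group, outcome) pairs with two groups."""
    by_group = defaultdict(list)
    for g, y in records:
        by_group[g].append(y)
    rates = [sum(ys) / len(ys) for ys in by_group.values()]
    return abs(rates[0] - rates[1])

def stealthily_biased_subsample(records, per_group, target_rate, seed=0):
    """Toy attack: for each group, publish per_group records of which
    round(per_group * target_rate) have outcome 1, so the parity gap
    computed on the published sample is (near) zero regardless of the
    gap in the full data. Assumes each (group, outcome) cell is large
    enough to supply the requested counts."""
    rng = random.Random(seed)
    pools = defaultdict(lambda: {0: [], 1: []})
    for g, y in records:
        pools[g][y].append((g, y))
    n_pos = round(per_group * target_rate)
    sample = []
    for g in pools:
        sample += rng.sample(pools[g][1], n_pos)
        sample += rng.sample(pools[g][0], per_group - n_pos)
    return sample
```

An auditor who only recomputes the metric on the published sample sees a gap of zero, even though the full data is strongly biased; detecting the fraud requires comparing the sample against the underlying distribution, which is exactly what the paper argues is hard.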

Published

2020-04-03

How to Cite

Fukuchi, K., Hara, S., & Maehara, T. (2020). Faking Fairness via Stealthily Biased Sampling. Proceedings of the AAAI Conference on Artificial Intelligence, 34(01), 412-419. https://doi.org/10.1609/aaai.v34i01.5377

Section

AAAI Special Technical Track: AI for Social Impact