Outlier Detection Bias Busted: Understanding Sources of Algorithmic Bias through Data-centric Factors
DOI: https://doi.org/10.1609/aies.v7i1.31644

Abstract
The astonishing successes of ML have raised growing concern for the fairness of modern methods when deployed in real-world settings. However, studies on fairness have mostly focused on supervised ML, while unsupervised outlier detection (OD), with numerous applications in finance, security, etc., has attracted little attention. While a few studies have proposed fairness-enhanced OD algorithms, they remain agnostic to the underlying driving mechanisms or sources of unfairness. Even within the supervised ML literature, there is debate on whether unfairness stems solely from algorithmic biases (i.e., design choices) or from the biases encoded in the data on which models are trained. To close this gap, this work aims to shed light on the possible sources of unfairness in OD by auditing detection models under different data-centric factors. By injecting various known biases into the input data (pertaining to sample size disparity, under-representation, feature measurement noise, and group membership obfuscation), we find that the OD algorithms under study all exhibit fairness pitfalls, although they differ in which types of data bias they are more susceptible to. Most notably, our study demonstrates that OD algorithm bias is not merely a data bias problem. A key realization is that the data properties that emerge from bias injection could just as well be organic, pertaining to natural group differences w.r.t. sparsity, base rate, variance, and multi-modality. Whether natural or injected, such data properties can give rise to unfairness as they interact with certain algorithmic design choices. Our work provides a deeper understanding of the possible sources of OD unfairness and serves as a framework for assessing the unfairness of future OD algorithms under specific data-centric factors. It also paves the way for future work on mitigation strategies by underscoring the susceptibility of various design choices.
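To make the audit setup concrete, below is a minimal Python sketch of one such bias injection and group-wise audit; it is not the paper's actual framework. IsolationForest stands in for the audited OD algorithms, and the group sizes, disparity ratio rho, contamination level, and the helper make_group are all illustrative assumptions.

# Minimal sketch of a bias-injection audit for outlier detection (OD).
# NOT the paper's framework: IsolationForest is a stand-in detector,
# and all parameters below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic data: two demographic groups drawn from the same distribution,
# each with a small fraction of true outliers.
def make_group(n, n_out):
    inliers = rng.normal(0.0, 1.0, size=(n, 2))
    outliers = rng.normal(4.0, 1.0, size=(n_out, 2))
    return np.vstack([inliers, outliers])

# Inject sample size disparity: group 1 is shrunk to a fraction rho of
# group 0's size, mimicking one of the studied data-centric factors.
rho = 0.1
X_a = make_group(n=1000, n_out=50)
X_b = make_group(n=int(1000 * rho), n_out=int(50 * rho))
X = np.vstack([X_a, X_b])
group = np.array([0] * len(X_a) + [1] * len(X_b))

# Audit: fit the detector on the biased data and compare per-group flag
# rates; a large gap under equal true contamination signals unfairness.
flags = IsolationForest(contamination=0.05, random_state=0).fit_predict(X)
for g in (0, 1):
    rate = np.mean(flags[group == g] == -1)  # -1 marks a flagged point
    print(f"group {g}: flag rate = {rate:.3f}")

The same harness can be repeated for the other injected biases (e.g., adding measurement noise to one group's features, or dropping the group attribute) and across detectors, which is the spirit of the audit the abstract describes.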
Published
2024-10-16
How to Cite
Ding, X., Xi, R., & Akoglu, L. (2024). Outlier Detection Bias Busted: Understanding Sources of Algorithmic Bias through Data-centric Factors. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 7(1), 384-395. https://doi.org/10.1609/aies.v7i1.31644
Section
Full Archival Papers