Causal Adversarial Perturbations for Individual Fairness and Robustness in Heterogeneous Data Spaces
DOI:
https://doi.org/10.1609/aaai.v38i10.29070
Keywords:
ML: Ethics, Bias, and Fairness; ML: Adversarial Learning & Robustness
Abstract
As responsible AI gains importance in machine learning algorithms, properties like fairness, adversarial robustness, and causality have received considerable attention in recent years. However, despite their individual significance, there remains a critical gap in simultaneously exploring and integrating these properties. In this paper, we propose a novel approach that examines the relationship between individual fairness, adversarial robustness, and structural causal models (SCMs) in heterogeneous data spaces, particularly when dealing with discrete sensitive attributes. We use SCMs and sensitive attributes to create a fair metric and apply it to measure semantic similarity among individuals. By introducing a novel causal adversarial perturbation (CAP) and applying adversarial training, we create a new regularizer that combines individual fairness, causality, and robustness in the classifier. Our method is evaluated on both real-world and synthetic datasets, demonstrating its effectiveness in achieving an accurate classifier that simultaneously exhibits fairness, adversarial robustness, and causal awareness.
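To make the high-level idea in the abstract concrete, the sketch below shows one plausible way an adversarial-training loop with a fairness regularizer of this kind could be wired. It is a minimal illustration, not the authors' method or released code: the function names (cap_perturb, train_step), the fair_dirs basis standing in for the SCM-derived fair metric, and all hyperparameters are hypothetical assumptions of this sketch.

# Minimal illustrative sketch (not the paper's released code): adversarial
# training with a regularizer driven by perturbations restricted to "fair"
# directions, i.e., changes between semantically similar individuals under
# an assumed causal/fair metric.
import torch
import torch.nn.functional as F

def cap_perturb(model, x, y, fair_dirs, eps=0.1, steps=5, lr=0.05):
    # fair_dirs: (k, d) orthonormal basis of directions that the fair metric
    # treats as semantically equivalent (an assumption of this sketch).
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        proj = delta @ fair_dirs.T @ fair_dirs   # keep delta in the fair subspace
        loss = F.cross_entropy(model(x + proj), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + lr * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return (delta @ fair_dirs.T @ fair_dirs).detach()

def train_step(model, opt, x, y, fair_dirs, lam=1.0):
    # Standard classification loss plus a penalty for prediction changes
    # under the worst-case fair (CAP-style) perturbation found above.
    delta = cap_perturb(model, x, y, fair_dirs)
    logits, logits_adv = model(x), model(x + delta)
    reg = F.kl_div(F.log_softmax(logits_adv, dim=-1),
                   F.softmax(logits, dim=-1).detach(),
                   reduction="batchmean")
    loss = F.cross_entropy(logits, y) + lam * reg
    opt.zero_grad(); loss.backward(); opt.step()
    return float(loss)

Restricting the inner maximization to a subspace of "fair" directions is only one way to encode the constraint that perturbed individuals remain semantically similar; the paper itself derives the perturbation and the similarity metric from a structural causal model over heterogeneous (including discrete sensitive) attributes.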
Published
2024-03-24
How to Cite
Ehyaei, A.-R., Mohammadi, K., Karimi, A.-H., Samadi, S., & Farnadi, G. (2024). Causal Adversarial Perturbations for Individual Fairness and Robustness in Heterogeneous Data Spaces. Proceedings of the AAAI Conference on Artificial Intelligence, 38(10), 11847-11855. https://doi.org/10.1609/aaai.v38i10.29070
Issue
Section
AAAI Technical Track on Machine Learning I