Identifying Reasons for Bias: An Argumentation-Based Approach

Authors

  • Madeleine Waller, King's College London
  • Odinaldo Rodrigues, King's College London
  • Oana Cocarascu, King's College London

DOI:

https://doi.org/10.1609/aaai.v38i19.30165

Keywords:

General

Abstract

As algorithmic decision-making systems become more prevalent in society, ensuring their fairness is increasingly important. Whilst there has been substantial research into building fair algorithmic decision-making systems, the majority of these methods require access to the training data, including personal characteristics, and are not transparent about which individuals are classified unfairly. In this paper, we propose a novel model-agnostic argumentation-based method to determine why an individual is classified differently from similar individuals. Our method uses a quantitative argumentation framework to represent the attribute-value pairs of an individual and of those similar to them, and applies a well-known semantics to identify the attribute-value pairs that contribute most to the individual's different classification. We evaluate our method on two datasets commonly used in the fairness literature and illustrate its effectiveness in identifying bias.

Published

2024-03-24

How to Cite

Waller, M., Rodrigues, O., & Cocarascu, O. (2024). Identifying Reasons for Bias: An Argumentation-Based Approach. Proceedings of the AAAI Conference on Artificial Intelligence, 38(19), 21664–21672. https://doi.org/10.1609/aaai.v38i19.30165

Section

AAAI Technical Track on Safe, Robust and Responsible AI Track