Identifying, Mitigating, and Anticipating Bias in Algorithmic Decisions

Authors

  • Joachim Baumann, University of Zurich

DOI:

https://doi.org/10.1609/aaai.v38i21.30393

Keywords:

Algorithmic Fairness, Responsible AI, Data Science For Social Good, Feedback Loops, Ethical Automated Decision Making

Abstract

Today's machine learning (ML) applications predominantly adhere to a standard paradigm: the decision maker designs the algorithm by optimizing a model for some objective function. While this has proven to be a powerful approach in many domains, it comes with inherent side effects: the power over the algorithmic outcomes lies solely in the hands of the algorithm designer, and alternative objectives, such as fairness, are often disregarded. This is particularly problematic if the algorithm is used to make consequential decisions that affect people's lives. My research focuses on developing principled methods to characterize and address the mismatch between these different objectives.
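To make the mismatch between objectives concrete, here is a minimal sketch (not from the paper itself) of one common fairness criterion, demographic parity: the absolute difference in positive-decision rates between two groups. An accuracy-only objective ignores this quantity entirely; a fairness-aware design could, for instance, add it as a penalty term. The function name, toy data, and penalty form are illustrative assumptions.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-decision rates between two groups.

    y_pred: binary decisions (0/1); group: group membership (0/1).
    (Illustrative helper, not an API from the paper.)
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate for group 0
    rate_b = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_a - rate_b)

# Toy decisions: group 0 is approved 3/4 of the time, group 1 only 1/4.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups    = [0, 0, 0, 0, 1, 1, 1, 1]
gap = demographic_parity_gap(decisions, groups)  # 0.5

# A purely predictive objective would leave this gap unconstrained; one
# hypothetical fairness-aware variant: total_loss = task_loss + lam * gap.
```

Which fairness criterion is appropriate (demographic parity, equalized odds, calibration, and so on) is itself a normative choice, which is part of the objective mismatch the abstract describes.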

Published

2024-03-24

How to Cite

Baumann, J. (2024). Identifying, Mitigating, and Anticipating Bias in Algorithmic Decisions. Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23385-23386. https://doi.org/10.1609/aaai.v38i21.30393