Identifying, Mitigating, and Anticipating Bias in Algorithmic Decisions


  • Joachim Baumann, University of Zurich



Algorithmic Fairness, Responsible AI, Data Science for Social Good, Feedback Loops, Ethical Automated Decision Making


Today's machine learning (ML) applications predominantly adhere to a standard paradigm: the decision maker designs the algorithm by optimizing a model for some objective function. While this has proven to be a powerful approach in many domains, it comes with inherent side effects: the power over the algorithmic outcomes lies solely in the hands of the algorithm designer, and alternative objectives, such as fairness, are often disregarded. This is particularly problematic if the algorithm is used to make consequential decisions that affect people's lives. My research focuses on developing principled methods to characterize and address the mismatch between these different objectives.




How to Cite

Baumann, J. (2024). Identifying, Mitigating, and Anticipating Bias in Algorithmic Decisions. Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23385-23386.