Explaining the Uncertainty in AI-Assisted Decision Making

Authors

  • Thao Le, The University of Melbourne

DOI:

https://doi.org/10.1609/aaai.v37i13.26920

Keywords:

Interpretability And Explainability, Human-Computer Interaction, Human-Machine Teams

Abstract

This project aims to improve human decision-making through explainability; specifically, by explaining the (un)certainty of machine learning models. Prior research has used uncertainty measures to promote trust and support decision-making. However, explaining why the AI model is confident (or not confident) in its prediction remains an open direction. By explaining model uncertainty, we can promote trust, improve understanding, and support better decision-making for users.
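The abstract does not specify which uncertainty measure the project uses; as an illustration only, the minimal sketch below computes predictive entropy over a classifier's softmax output, one common measure of a model's (un)certainty that could then be surfaced to users. The function name and example probabilities are hypothetical.

```python
import numpy as np

def predictive_entropy(probs: np.ndarray) -> float:
    """Shannon entropy of a predicted class distribution.

    Higher entropy means the model is less certain about its prediction.
    """
    probs = np.clip(probs, 1e-12, 1.0)  # guard against log(0)
    return float(-np.sum(probs * np.log(probs)))

# Example: a confident vs. an uncertain 3-class prediction.
confident = np.array([0.95, 0.03, 0.02])
uncertain = np.array([0.40, 0.35, 0.25])

print(predictive_entropy(confident))  # low entropy -> high certainty
print(predictive_entropy(uncertain))  # high entropy -> low certainty
```

A value near zero indicates a confident prediction, while values near the maximum (log of the number of classes) indicate the model is close to guessing; an explanation could use this to tell users why a given prediction should or should not be trusted.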

Published

2024-07-15

How to Cite

Le, T. (2024). Explaining the Uncertainty in AI-Assisted Decision Making. Proceedings of the AAAI Conference on Artificial Intelligence, 37(13), 16119-16120. https://doi.org/10.1609/aaai.v37i13.26920