Uncertain Context: Uncertainty Quantification in Machine Learning

Authors

  • Brian Jalaian, U.S. Army Research Laboratory
  • M. Lee, U.S. Army Research Laboratory
  • Stephen Russell, U.S. Army Research Laboratory

DOI:

https://doi.org/10.1609/aimag.v40i4.4812

Abstract

Machine learning and artificial intelligence will be deeply embedded in the intelligent systems humans use to automate tasking, optimize planning, and support decision-making. However, many of these methods can be challenged by dynamic computational contexts, resulting in uncertainty in prediction errors and overall system outputs. Therefore, it will be increasingly important for uncertainties in underlying learning-related computer models to be quantified and communicated. The goal of this article is to provide an accessible overview of computational context and its relationship to uncertainty quantification for machine learning, as well as to provide general suggestions on how to implement uncertainty quantification when doing statistical learning. Specifically, we will discuss the challenge of quantifying uncertainty in predictions using popular machine learning models. We present several sources of uncertainty and their implications on statistical models and subsequent machine learning predictions.
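As a concrete illustration of the kind of uncertainty quantification the article discusses, the sketch below uses a bootstrap ensemble to put an interval around a model's prediction. This is a minimal, hedged example (synthetic data, a simple linear model, and the bootstrap method chosen for illustration); it is not drawn from the article itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (illustrative only): y = 2x + Gaussian noise.
x = rng.uniform(0, 10, size=100)
y = 2.0 * x + rng.normal(0, 1.0, size=100)

def fit_line(x, y):
    """Ordinary least squares fit for a 1-D linear model."""
    A = np.vstack([x, np.ones_like(x)]).T
    coef, _, _, _ = np.linalg.lstsq(A, y, rcond=None)
    return coef  # (slope, intercept)

# Bootstrap ensemble: refit the model on resampled data and use the
# spread of the ensemble's predictions as an uncertainty estimate.
n_boot = 200
x_new = 5.0
preds = []
for _ in range(n_boot):
    idx = rng.integers(0, len(x), size=len(x))
    slope, intercept = fit_line(x[idx], y[idx])
    preds.append(slope * x_new + intercept)

preds = np.array(preds)
mean = preds.mean()
lo, hi = np.percentile(preds, [2.5, 97.5])
print(f"prediction at x={x_new}: {mean:.2f}, 95% interval: [{lo:.2f}, {hi:.2f}]")
```

The same pattern (an ensemble of refit models) generalizes to more complex learners, where the ensemble spread communicates how sensitive a prediction is to the particular training sample observed.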

Published

2019-12-20

How to Cite

Jalaian, B., Lee, M., & Russell, S. (2019). Uncertain Context: Uncertainty Quantification in Machine Learning. AI Magazine, 40(4), 40-49. https://doi.org/10.1609/aimag.v40i4.4812

Section

Special Topic Articles