One Step Closer to Unbiased Aleatoric Uncertainty Estimation

Authors

  • Wang Zhang, Massachusetts Institute of Technology
  • Ziwen Martin Ma, Harvard University
  • Subhro Das, MIT-IBM Watson AI Lab, IBM Research
  • Tsui-Wei Lily Weng, University of California San Diego
  • Alexandre Megretski, Massachusetts Institute of Technology
  • Luca Daniel, Massachusetts Institute of Technology
  • Lam M. Nguyen, IBM Research, Thomas J. Watson Research Center

DOI:

https://doi.org/10.1609/aaai.v38i15.29627

Keywords:

ML: Calibration & Uncertainty Quantification

Abstract

Neural networks are powerful tools in various applications, and quantifying their uncertainty is crucial for reliable decision-making. In the deep learning field, uncertainty is usually categorized into aleatoric (data) and epistemic (model) uncertainty. In this paper, we point out that the existing popular variance attenuation method significantly overestimates aleatoric uncertainty. To address this issue, we propose a new estimation method that actively de-noises the observed data. Through a broad range of experiments, we demonstrate that our proposed approach provides a much closer approximation to the actual data uncertainty than the standard method.
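For context, the "variance attenuation" baseline the abstract critiques is commonly implemented as a Gaussian negative log-likelihood loss, where the network predicts both a mean and a log-variance and the predicted variance is read off as the aleatoric-uncertainty estimate. Below is a minimal sketch of that standard objective; the function and variable names are illustrative and not the paper's notation, and this is the baseline loss, not the paper's proposed de-noising method.

```python
import numpy as np

def gaussian_nll(y, mu, log_var):
    """Per-sample negative log-likelihood of y under N(mu, exp(log_var)).

    In the variance-attenuation setup, a network outputs both `mu` (mean
    head) and `log_var` (log-variance head); minimizing this loss trains
    exp(log_var) to track the residual noise, which is then reported as
    the aleatoric uncertainty. (Illustrative sketch, not the paper's code.)
    """
    var = np.exp(log_var)  # predict log-variance for numerical stability
    return 0.5 * (log_var + (y - mu) ** 2 / var + np.log(2 * np.pi))
```

For a fixed input, the loss is minimized when exp(log_var) equals the mean squared residual at that input; the paper's observation is that this residual conflates noise with model error, biasing the estimate upward.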

Published

2024-03-24

How to Cite

Zhang, W., Ma, Z. M., Das, S., Weng, T.-W. L., Megretski, A., Daniel, L., & Nguyen, L. M. (2024). One Step Closer to Unbiased Aleatoric Uncertainty Estimation. Proceedings of the AAAI Conference on Artificial Intelligence, 38(15), 16857-16864. https://doi.org/10.1609/aaai.v38i15.29627

Section

AAAI Technical Track on Machine Learning VI