The Unreasonable Effectiveness of Deep Evidential Regression

Authors

  • Nis Meinert, Pasteur Labs
  • Jakob Gawlikowski, German Aerospace Center (DLR)
  • Alexander Lavin, Pasteur Labs

DOI:

https://doi.org/10.1609/aaai.v37i8.26096

Keywords:

ML: Calibration & Uncertainty Quantification, ML: Classification and Regression, ML: Deep Neural Architectures, RU: Uncertainty Representations

Abstract

There is a significant need for principled uncertainty reasoning in machine learning systems as they are increasingly deployed in safety-critical domains. A new approach with uncertainty-aware regression-based neural networks (NNs), based on learning evidential distributions for aleatoric and epistemic uncertainties, shows promise over traditional deterministic methods and typical Bayesian NNs, notably with the capability to disentangle aleatoric and epistemic uncertainties. Despite some empirical success of Deep Evidential Regression (DER), there are important gaps in the mathematical foundation that raise the question of why the proposed technique seemingly works. We detail the theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification method. We go on to discuss corrections and redefinitions of how aleatoric and epistemic uncertainties should be extracted from NNs.
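For context on the technique the abstract critiques: in Deep Evidential Regression the network predicts, per input, the four parameters (gamma, nu, alpha, beta) of a Normal-Inverse-Gamma distribution, from which point prediction, aleatoric, and epistemic uncertainty are read off in closed form. The sketch below shows the commonly cited extraction formulas; the function name is illustrative, not from the paper.

```python
def der_uncertainties(gamma, nu, alpha, beta):
    """Extract (prediction, aleatoric, epistemic) from the predicted
    Normal-Inverse-Gamma parameters, as commonly done in Deep Evidential
    Regression (requires alpha > 1 for the moments to exist)."""
    prediction = gamma                       # E[mu]: the point estimate
    aleatoric = beta / (alpha - 1.0)         # E[sigma^2]: expected data noise
    epistemic = beta / (nu * (alpha - 1.0))  # Var[mu]: model uncertainty
    return prediction, aleatoric, epistemic
```

Note that aleatoric and epistemic uncertainty differ only by the factor nu, which is one of the couplings the paper argues makes the disentanglement heuristic rather than exact.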

Published

2023-06-26

How to Cite

Meinert, N., Gawlikowski, J., & Lavin, A. (2023). The Unreasonable Effectiveness of Deep Evidential Regression. Proceedings of the AAAI Conference on Artificial Intelligence, 37(8), 9134-9142. https://doi.org/10.1609/aaai.v37i8.26096

Section

AAAI Technical Track on Machine Learning III