Uncertainty Regularized Evidential Regression

Authors

  • Kai Ye, University of Pittsburgh
  • Tiejin Chen, Arizona State University
  • Hua Wei, Arizona State University
  • Liang Zhan, University of Pittsburgh

DOI:

https://doi.org/10.1609/aaai.v38i15.29583

Keywords:

ML: Calibration & Uncertainty Quantification, ML: Classification and Regression

Abstract

The Evidential Regression Network (ERN) is a novel approach that integrates deep learning with Dempster-Shafer theory to predict a target and quantify the associated uncertainty. Guided by the underlying theory, specific activation functions must be employed to enforce non-negative values, a constraint that compromises model performance by limiting its ability to learn from all samples. This paper provides a theoretical analysis of this limitation and introduces an improvement to overcome it. First, we define the region in which the model cannot effectively learn from the samples. We then analyze the ERN in detail and investigate this constraint. Leveraging the insights from our analysis, we address the limitation by introducing a novel regularization term that enables the ERN to learn from the whole training set. Our extensive experiments substantiate our theoretical findings and demonstrate the effectiveness of the proposed solution.
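For orientation, below is a minimal PyTorch sketch of the standard evidential regression setup the abstract refers to: a network head outputs the parameters of a Normal-Inverse-Gamma (NIG) distribution, with softplus activations enforcing the non-negativity constraints, trained with the NIG negative log-likelihood (following Amini et al.'s deep evidential regression). It deliberately does not include the uncertainty regularization term proposed in this paper, which is not given in the abstract; the names `EvidentialHead` and `nig_nll` are illustrative only.

    import math
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class EvidentialHead(nn.Module):
        """Map features to Normal-Inverse-Gamma parameters (gamma, nu, alpha, beta).

        Softplus keeps nu > 0 and beta > 0, and alpha is shifted so alpha > 1;
        these are the non-negativity constraints the abstract refers to.
        """
        def __init__(self, in_features: int):
            super().__init__()
            self.linear = nn.Linear(in_features, 4)

        def forward(self, x):
            gamma, nu_raw, alpha_raw, beta_raw = self.linear(x).chunk(4, dim=-1)
            nu = F.softplus(nu_raw)
            alpha = F.softplus(alpha_raw) + 1.0
            beta = F.softplus(beta_raw)
            return gamma, nu, alpha, beta

    def nig_nll(y, gamma, nu, alpha, beta, eps=1e-8):
        """Negative log-likelihood of target y under the evidential NIG model."""
        omega = 2.0 * beta * (1.0 + nu)
        return (0.5 * torch.log(math.pi / (nu + eps))
                - alpha * torch.log(omega + eps)
                + (alpha + 0.5) * torch.log(nu * (y - gamma) ** 2 + omega + eps)
                + torch.lgamma(alpha) - torch.lgamma(alpha + 0.5))

This baseline serves only as context: the paper's contribution is a regularization term added on top of such a loss so that the ERN can still learn in the region where, per the authors' analysis, the constrained activations leave the model unable to learn from samples.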

Published

2024-03-24

How to Cite

Ye, K., Chen, T., Wei, H., & Zhan, L. (2024). Uncertainty Regularized Evidential Regression. Proceedings of the AAAI Conference on Artificial Intelligence, 38(15), 16460-16468. https://doi.org/10.1609/aaai.v38i15.29583

Issue

Vol. 38 No. 15 (2024)

Section

AAAI Technical Track on Machine Learning VI