Towards Modeling Uncertainties of Self-Explaining Neural Networks via Conformal Prediction

Authors

  • Wei Qian Iowa State University
  • Chenxu Zhao Iowa State University
  • Yangyi Li Iowa State University
  • Fenglong Ma Pennsylvania State University
  • Chao Zhang Georgia Institute of Technology
  • Mengdi Huai Iowa State University

DOI:

https://doi.org/10.1609/aaai.v38i13.29382

Keywords:

ML: Transparent, Interpretable, Explainable ML, PEAI: Accountability, Interpretability & Explainability

Abstract

Despite the recent progress in deep neural networks (DNNs), it remains challenging to explain the predictions made by DNNs. Existing explanation methods for DNNs mainly focus on post-hoc explanations, where a separate explanatory model is employed to provide explanations. The fact that post-hoc methods can fail to reveal the actual reasoning process of DNNs raises the need to build DNNs with built-in interpretability. Motivated by this, many self-explaining neural networks have been proposed to generate not only accurate predictions but also clear and intuitive insights into why a particular decision was made. However, existing self-explaining networks are limited in providing distribution-free uncertainty quantification for the two simultaneously generated prediction outcomes (i.e., a sample's final prediction and the corresponding explanations for interpreting that prediction). Importantly, they also fail to establish a connection between the confidence values assigned to the generated explanations in the interpretation layer and those allocated to the final predictions in the ultimate prediction layer. To tackle the aforementioned challenges, in this paper, we design a novel uncertainty modeling framework for self-explaining networks, which not only demonstrates strong distribution-free uncertainty modeling performance for the generated explanations in the interpretation layer but also excels in producing efficient and effective prediction sets for the final predictions based on the informative high-level basis explanations. We provide a theoretical analysis of the proposed framework. Extensive experimental evaluation demonstrates the effectiveness of the proposed uncertainty framework.
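To make the notion of distribution-free prediction sets concrete, the sketch below shows plain split conformal prediction for a classifier. This is a generic illustration of the conformal machinery the abstract builds on, not the paper's proposed framework; the function name, the choice of nonconformity score (one minus the softmax mass on the true class), and the synthetic data are all assumptions made for the example.

```python
# Illustrative split-conformal sketch; NOT the paper's method.
# Builds prediction sets that contain the true label with probability
# >= 1 - alpha under exchangeability, with no distributional assumptions.
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction with the 1 - p(true class) score."""
    n = len(cal_labels)
    # Nonconformity score: one minus the softmax mass on the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile level of the calibration scores.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    qhat = np.quantile(scores, q_level, method="higher")
    # A class enters the set when its score falls below the threshold.
    return [np.where(1.0 - p <= qhat)[0] for p in test_probs]

# Toy usage with a tiny synthetic 3-class calibration set (hypothetical data).
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(3), size=200)
cal_labels = cal_probs.argmax(axis=1)  # labels agree with the model here
test_probs = rng.dirichlet(np.ones(3), size=5)
sets = conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1)
```

The challenge the paper targets goes beyond this sketch: a self-explaining network emits two coupled outputs (the explanation and the final prediction), so the calibration must cover both layers and link their confidence values, which plain split conformal calibration does not do.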

Published

2024-03-24

How to Cite

Qian, W., Zhao, C., Li, Y., Ma, F., Zhang, C., & Huai, M. (2024). Towards Modeling Uncertainties of Self-Explaining Neural Networks via Conformal Prediction. Proceedings of the AAAI Conference on Artificial Intelligence, 38(13), 14651-14659. https://doi.org/10.1609/aaai.v38i13.29382

Section

AAAI Technical Track on Machine Learning IV