MaxEnt Loss: Calibrating Graph Neural Networks under Out-of-Distribution Shift (Student Abstract)

Authors

  • Dexter Neo National University of Singapore

DOI:

https://doi.org/10.1609/aaai.v38i21.30487

Keywords:

Graph Neural Networks, Calibration, Uncertainty Estimation, Machine Learning Safety, Out-of-Distribution

Abstract

We present a new, simple and effective loss function for calibrating graph neural networks (GNNs). Miscalibration is the problem whereby a model's probabilities do not reflect its correctness, making the model difficult and possibly dangerous to deploy in the real world. We compare our method against other baselines on a novel in-distribution (ID) and out-of-distribution (OOD) graph form of the Celeb-A faces dataset. Our findings show that our method improves calibration for GNNs, which are not immune to miscalibration either ID or OOD. Our code is available for review at https://github.com/dexterdley/CS6208/tree/main/Project.
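The abstract does not spell out the loss formulation, so the following is only a minimal sketch of the general idea behind maximum-entropy calibration losses: standard cross-entropy is combined with a term that rewards higher predictive entropy, discouraging overconfident probabilities. The function name `maxent_loss` and the weighting parameter `lam` are illustrative assumptions, not the paper's actual API or formulation.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def maxent_loss(logits, labels, lam=0.1):
    """Hypothetical sketch: cross-entropy minus a weighted entropy bonus.

    A higher-entropy (less overconfident) predictive distribution
    lowers the loss, which is the intuition behind entropy-based
    calibration penalties; the paper's exact constraints may differ.
    """
    p = softmax(logits)
    n = logits.shape[0]
    # Negative log-likelihood of the true class (cross-entropy).
    ce = -np.log(p[np.arange(n), labels] + 1e-12).mean()
    # Mean Shannon entropy of the predictive distributions.
    ent = -(p * np.log(p + 1e-12)).sum(axis=-1).mean()
    return ce - lam * ent
```

With `lam > 0`, confident-but-wrong predictions are penalized more than appropriately uncertain ones, which is the behavior a calibration-oriented loss aims for.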

Published

2024-03-24

How to Cite

Neo, D. (2024). MaxEnt Loss: Calibrating Graph Neural Networks under Out-of-Distribution Shift (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23594-23596. https://doi.org/10.1609/aaai.v38i21.30487