Moderate Message Passing Improves Calibration: A Universal Way to Mitigate Confidence Bias in Graph Neural Networks

Authors

  • Min Wang College of Systems Engineering, National University of Defense Technology
  • Hao Yang College of Systems Engineering, National University of Defense Technology
  • Jincai Huang College of Systems Engineering, National University of Defense Technology
  • Qing Cheng College of Systems Engineering, National University of Defense Technology

DOI:

https://doi.org/10.1609/aaai.v38i19.30167

Keywords:

General

Abstract

Confidence calibration in Graph Neural Networks (GNNs) aims to align a model's predicted confidence with its actual accuracy. Recent studies have indicated that GNNs exhibit an under-confidence bias, which contrasts with the over-confidence bias commonly observed in deep neural networks. However, our deeper investigation into this topic reveals that not all GNNs exhibit this behavior. Upon closer examination of message passing in GNNs, we found a clear link between message aggregation and confidence levels. Specifically, GNNs with extensive message aggregation, often seen in deep architectures or when leveraging large amounts of labeled data, tend to exhibit over-confidence. This over-confidence can be attributed to factors such as over-learning and over-smoothing. Conversely, GNNs with fewer layers, known for their balanced message passing and superior node representations, may exhibit under-confidence. To counter both confidence biases, we introduce the Adaptive Unified Label Smoothing (AU-LS) technique. Our experiments show that AU-LS outperforms existing methods, addressing both over- and under-confidence in various GNN scenarios.
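The abstract does not detail AU-LS itself, but it builds on two standard ingredients: measuring miscalibration, typically via Expected Calibration Error (ECE), and label smoothing, which softens one-hot training targets to temper over-confidence. The following minimal sketch shows both building blocks; the function names and the uniform-smoothing variant are illustrative assumptions, not the paper's method.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then take the weighted average
    of |accuracy - confidence| over the bins."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()      # accuracy within the bin
            conf = confidences[mask].mean() # mean confidence within the bin
            ece += mask.mean() * abs(acc - conf)
    return ece

def smooth_labels(labels, n_classes, eps=0.1):
    """Uniform label smoothing: move eps probability mass from the true
    class to a uniform distribution over all classes."""
    one_hot = np.eye(n_classes)[labels]
    return (1.0 - eps) * one_hot + eps / n_classes
```

An over-confident model has mean confidence above its accuracy (positive confidence-accuracy gap); an under-confident one sits below. A single fixed `eps`, as sketched here, only pushes confidence down, which is why an adaptive scheme such as AU-LS is needed to handle both directions.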

Published

2024-03-24

How to Cite

Wang, M., Yang, H., Huang, J., & Cheng, Q. (2024). Moderate Message Passing Improves Calibration: A Universal Way to Mitigate Confidence Bias in Graph Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 38(19), 21681-21689. https://doi.org/10.1609/aaai.v38i19.30167

Issue

Section

AAAI Technical Track on Safe, Robust and Responsible AI Track