Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks

Authors

  • Uday Shankar Shanthamallu, Arizona State University
  • Jayaraman J. Thiagarajan, Lawrence Livermore National Labs
  • Andreas Spanias, Arizona State University

Keywords

Graph-based Machine Learning, Adversarial Learning & Robustness, Representation Learning, Semi-Supervised Learning

Abstract

Graph Neural Networks (GNNs), a generalization of neural networks to graph-structured data, are often implemented using message passing between the entities of a graph. While GNNs are effective for node classification, link prediction, and graph classification, they are vulnerable to adversarial attacks, i.e., a small perturbation to the structure can lead to a non-trivial performance degradation. In this work, we propose Uncertainty Matching GNN (UM-GNN), which aims to improve the robustness of GNN models, particularly against poisoning attacks on the graph structure, by leveraging epistemic uncertainties from the message-passing framework. More specifically, we propose to build a surrogate predictor that does not directly access the graph structure, but systematically extracts reliable knowledge from a standard GNN through a novel uncertainty-matching strategy. Interestingly, this decoupling makes UM-GNN immune to evasion attacks by design, and it achieves significantly improved robustness against poisoning attacks. Using empirical studies with standard benchmarks and a suite of global and targeted attacks, we demonstrate the effectiveness of UM-GNN when compared to existing baselines, including the state-of-the-art robust GCN.
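The uncertainty-matching idea in the abstract can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the paper's implementation: it estimates per-node epistemic uncertainty from multiple stochastic GNN forward passes (e.g., MC dropout) and uses it to down-weight a distillation loss, so a graph-free surrogate absorbs only the GNN's reliable predictions. All function names and the exact uncertainty proxy (predictive variance) are illustrative choices.

```python
import numpy as np

def epistemic_uncertainty(mc_probs):
    # mc_probs: array of shape (T, N, C) holding T stochastic GNN forward
    # passes (e.g., MC dropout) over N nodes with C classes.
    # Predictive variance, averaged over classes, serves as a simple
    # epistemic-uncertainty proxy (an illustrative choice).
    return mc_probs.var(axis=0).mean(axis=1)  # shape (N,)

def uncertainty_weighted_distillation(mc_probs, surrogate_probs, eps=1e-8):
    # The mean GNN prediction acts as the soft target for the surrogate.
    target = mc_probs.mean(axis=0)            # shape (N, C)
    u = epistemic_uncertainty(mc_probs)       # shape (N,)
    # Confidence weights: nodes where the GNN is uncertain (possibly due to
    # poisoned structure) contribute less to the matching loss.
    w = 1.0 / (1.0 + u)
    # Per-node cross-entropy between the soft target and the surrogate.
    ce = -(target * np.log(surrogate_probs + eps)).sum(axis=1)  # shape (N,)
    return float((w * ce).mean())
```

In this sketch the surrogate never sees the adjacency structure; it only matches the GNN's predictions where the GNN is confident, which mirrors the decoupling described in the abstract.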

Published

2021-05-18

How to Cite

Shanthamallu, U. S., Thiagarajan, J. J., & Spanias, A. (2021). Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks. Proceedings of the AAAI Conference on Artificial Intelligence, 35(11), 9524-9532. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17147

Section

AAAI Technical Track on Machine Learning IV