A Simple and Yet Fairly Effective Defense for Graph Neural Networks

Authors

  • Sofiane Ennadir, KTH Royal Institute of Technology
  • Yassine Abbahaddou, Ecole Polytechnique
  • Johannes F. Lutzeyer, Ecole Polytechnique
  • Michalis Vazirgiannis, Ecole Polytechnique & KTH Royal Institute of Technology
  • Henrik Boström, KTH Royal Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v38i19.30098

Keywords:

General

Abstract

Graph Neural Networks (GNNs) have emerged as the dominant approach for machine learning on graph-structured data. However, concerns have arisen regarding the vulnerability of GNNs to small adversarial perturbations. Existing defense methods against such perturbations suffer from high time complexity and can negatively impact the model's performance on clean graphs. To address these challenges, this paper introduces NoisyGNN, a novel defense method that incorporates noise into the underlying model's architecture. We establish a theoretical connection between noise injection and the enhancement of GNN robustness, highlighting the effectiveness of our approach. We further conduct extensive empirical evaluations on the node classification task to validate our theoretical findings, focusing on two popular GNNs: the GCN and GIN. The results demonstrate that NoisyGNN achieves defense performance superior or comparable to existing methods while minimizing the added time complexity. The NoisyGNN approach is model-agnostic, allowing it to be integrated with different GNN architectures. Successful combinations of NoisyGNN with existing defense techniques yield further improved adversarial defense results. Our code is publicly available at: https://github.com/Sennadir/NoisyGNN.
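To make the core idea concrete, the following is a minimal NumPy sketch of noise injection in a single GCN layer. It is not the authors' implementation: the function name, the Gaussian noise choice, and the injection point (the hidden representation before the nonlinearity) are illustrative assumptions; the paper should be consulted for the exact formulation.

```python
import numpy as np

def noisy_gcn_layer(A_hat, H, W, sigma=0.0, rng=None):
    """One GCN layer with optional noise injection (illustrative sketch).

    A_hat : normalized adjacency matrix with self-loops, shape (n, n)
    H     : node features, shape (n, d_in)
    W     : weight matrix, shape (d_in, d_out)
    sigma : noise scale; sigma = 0 recovers the standard GCN layer
    """
    # Standard GCN propagation: aggregate neighbor features, then transform.
    Z = A_hat @ H @ W
    if sigma > 0.0:
        # Inject zero-mean Gaussian noise into the hidden representation
        # (assumed injection point; the distribution is an assumption too).
        rng = np.random.default_rng() if rng is None else rng
        Z = Z + rng.normal(0.0, sigma, size=Z.shape)
    return np.maximum(Z, 0.0)  # ReLU activation
```

The appeal of such a defense, as the abstract notes, is its negligible overhead: the only extra cost per layer is sampling one noise tensor, and the mechanism drops into any message-passing architecture unchanged.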

Published

2024-03-24

How to Cite

Ennadir, S., Abbahaddou, Y., Lutzeyer, J. F., Vazirgiannis, M., & Boström, H. (2024). A Simple and Yet Fairly Effective Defense for Graph Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 38(19), 21063-21071. https://doi.org/10.1609/aaai.v38i19.30098

Section

AAAI Technical Track on Safe, Robust and Responsible AI Track