Task and Model Agnostic Adversarial Attack on Graph Neural Networks

Authors

  • Kartik Sharma Georgia Institute of Technology, Atlanta
  • Samidha Verma Indian Institute of Technology, Delhi
  • Sourav Medya University of Illinois, Chicago
  • Arnab Bhattacharya Indian Institute of Technology, Kanpur
  • Sayan Ranu Indian Institute of Technology, Delhi

DOI:

https://doi.org/10.1609/aaai.v37i12.26761

Keywords:

General

Abstract

Adversarial attacks on Graph Neural Networks (GNNs) reveal their security vulnerabilities, limiting their adoption in safety-critical applications. However, existing attack strategies rely on knowledge of either the GNN model being used or the predictive task being attacked. Is this knowledge necessary? For example, a graph may be used for multiple downstream tasks unknown to a practical attacker. It is thus important to test the vulnerability of GNNs to adversarial perturbations in a model- and task-agnostic setting. In this work, we study this problem and show that GNNs remain vulnerable even when the downstream task and model are unknown. The proposed algorithm, TANDIS (Targeted Attack via Neighborhood DIStortion), shows that distortion of node neighborhoods is effective in drastically compromising prediction performance. Although neighborhood distortion is an NP-hard problem, TANDIS designs an effective heuristic through a novel combination of a Graph Isomorphism Network with deep Q-learning. Extensive experiments on real datasets show that, on average, TANDIS is up to 50% more effective than state-of-the-art techniques, while being more than 1000 times faster.
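The sketch below illustrates the core intuition from the abstract: score candidate edge flips by how much they distort a target node's embedding under a task-agnostic, GIN-style encoder. It is a simplified greedy proxy, not the authors' implementation; all names (gin_embed, score_flip, etc.) are hypothetical, and the full TANDIS method additionally trains a deep Q-network to select perturbations rather than scoring each flip exhaustively.

```python
# Minimal sketch (assumed, not the authors' code): rank edge flips by the
# distortion they induce in a target node's GIN-style neighborhood embedding.
import torch


def gin_layer(x, adj, mlp):
    """One GIN-style layer: sum-aggregate neighbors, then apply an MLP (eps = 0)."""
    return mlp(x + adj @ x)


def gin_embed(x, adj, mlps):
    """Stack of GIN layers acting as a task-agnostic node encoder."""
    h = x
    for mlp in mlps:
        h = gin_layer(h, adj, mlp)
    return h


def score_flip(x, adj, mlps, target, edge):
    """Distortion of the target node's embedding caused by flipping one edge."""
    u, v = edge
    h_before = gin_embed(x, adj, mlps)[target]
    adj_pert = adj.clone()
    adj_pert[u, v] = 1.0 - adj_pert[u, v]   # add or remove the edge
    adj_pert[v, u] = adj_pert[u, v]         # keep the graph undirected
    h_after = gin_embed(x, adj_pert, mlps)[target]
    return torch.norm(h_after - h_before).item()


if __name__ == "__main__":
    # Toy usage: greedily pick the single flip that most distorts node 0's neighborhood.
    torch.manual_seed(0)
    n, d = 6, 4
    x = torch.randn(n, d)
    adj = torch.zeros(n, n)
    for u, v in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]:
        adj[u, v] = adj[v, u] = 1.0
    mlps = [torch.nn.Sequential(torch.nn.Linear(d, d), torch.nn.ReLU()) for _ in range(2)]
    candidates = [(0, 3), (0, 4), (1, 5)]
    best = max(candidates, key=lambda e: score_flip(x, adj, mlps, target=0, edge=e))
    print("most distorting flip for node 0:", best)
```

In the paper's setting, such distortion scores guide targeted, budget-constrained perturbations without ever querying the downstream task or the victim model.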

Published

2023-06-26

How to Cite

Sharma, K., Verma, S., Medya, S., Bhattacharya, A., & Ranu, S. (2023). Task and Model Agnostic Adversarial Attack on Graph Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 37(12), 15091-15099. https://doi.org/10.1609/aaai.v37i12.26761

Section

AAAI Special Track on Safe and Robust AI