Robust Heterogeneous Graph Neural Networks against Adversarial Attacks

Authors

  • Mengmei Zhang, Beijing University of Posts and Telecommunications
  • Xiao Wang, Beijing University of Posts and Telecommunications
  • Meiqi Zhu, Beijing University of Posts and Telecommunications
  • Chuan Shi, Beijing University of Posts and Telecommunications
  • Zhiqiang Zhang, Ant Group
  • Jun Zhou, Ant Group

DOI:

https://doi.org/10.1609/aaai.v36i4.20357

Keywords:

Data Mining & Knowledge Management (DMKM)

Abstract

Heterogeneous Graph Neural Networks (HGNNs) have drawn increasing attention in recent years and achieved outstanding performance in many tasks. However, despite their wide use, their robustness to adversarial attacks is not yet understood. In this work, we first systematically study the robustness of HGNNs and show that they can be easily fooled by adding an adversarial edge between the target node and a large-degree node (i.e., hub). Furthermore, we identify two key reasons for this vulnerability of HGNNs: one is the perturbation enlargement effect, i.e., HGNNs, failing to encode transiting probability, enlarge the effect of the adversarial hub compared with GCNs; the other is the soft attention mechanism, which assigns positive attention values to obviously unreliable neighbors. Based on these two observations, we propose RoHe, a novel robust HGNN framework against topology adversarial attacks, which equips HGNNs with an attention purifier that prunes malicious neighbors based on both topology and features. Specifically, to eliminate the perturbation enlargement, we introduce the metapath-based transiting probability as the prior criterion of the purifier, restraining the confidence of malicious neighbors introduced by the adversarial hub. The purifier then learns to mask out neighbors with low confidence, and thus effectively alleviates the negative effect of malicious neighbors in the soft attention mechanism. Extensive experiments on different benchmark datasets with multiple HGNNs show considerable improvement of HGNNs under adversarial attacks, demonstrating the effectiveness and generalization ability of our defense framework.
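To make the purifier idea concrete, below is a minimal sketch (not the authors' code) of masked soft attention with a transiting-probability prior, under assumed shapes, a hypothetical keep_k hyperparameter, and PyTorch as the framework: raw attention logits over metapath-based neighbors are combined with the prior confidence, low-confidence neighbors are masked to negative infinity, and the surviving neighbors are re-normalized with softmax.

```python
import torch

def purified_attention(att_logits, trans_prob, keep_k=8):
    """
    att_logits: [N, M] raw attention scores of N target nodes over M metapath neighbors
    trans_prob: [N, M] metapath-based transiting probabilities (prior confidence)
    keep_k:     number of neighbors kept per target node (hypothetical hyperparameter)
    """
    # Confidence = attention logit modulated by the transiting-probability prior,
    # so neighbors reached only through a high-degree adversarial hub score low.
    confidence = att_logits + torch.log(trans_prob + 1e-12)

    # Keep only the top-k most confident neighbors; mask out the rest.
    topk_idx = confidence.topk(keep_k, dim=1).indices
    mask = torch.full_like(confidence, float('-inf'))
    mask.scatter_(1, topk_idx, 0.0)

    # Soft attention restricted to the purified neighbor set.
    return torch.softmax(att_logits + mask, dim=1)

# Toy usage: 2 target nodes, 10 candidate metapath neighbors each.
att = torch.randn(2, 10)
prob = torch.rand(2, 10)
prob = prob / prob.sum(dim=1, keepdim=True)
weights = purified_attention(att, prob, keep_k=4)
print(weights)  # masked neighbors receive zero attention weight
```

The masking step is what distinguishes this from plain soft attention: neighbors outside the top-k confident set contribute exactly zero, rather than a small positive weight that an adversarial hub could still exploit.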

Published

2022-06-28

How to Cite

Zhang, M., Wang, X., Zhu, M., Shi, C., Zhang, Z., & Zhou, J. (2022). Robust Heterogeneous Graph Neural Networks against Adversarial Attacks. Proceedings of the AAAI Conference on Artificial Intelligence, 36(4), 4363-4370. https://doi.org/10.1609/aaai.v36i4.20357

Issue

Vol. 36 No. 4 (2022)

Section

AAAI Technical Track on Data Mining and Knowledge Management