Blindfolded Attackers Still Threatening: Strict Black-Box Adversarial Attacks on Graphs


  • Jiarong Xu Zhejiang University
  • Yizhou Sun UCLA
  • Xin Jiang UCLA
  • Yanhao Wang Zhejiang University
  • Chunping Wang FinVolution
  • Jiangang Lu Zhejiang University
  • Yang Yang Zhejiang University



Data Mining & Knowledge Management (DMKM)


Adversarial attacks on graphs have attracted considerable research interest. Existing works assume the attacker is either (partly) aware of the victim model or able to send queries to it. These assumptions are, however, unrealistic. To bridge the gap between theoretical graph attacks and real-world scenarios, in this work we propose a novel and more realistic setting: the strict black-box graph attack, in which the attacker has no knowledge about the victim model at all and is not allowed to send any queries. To design such an attack strategy, we first propose a generic graph filter to unify different families of graph-based models. The strength of an attack can then be quantified by the change in the graph filter before and after the attack. By maximizing this change, we are able to find an effective attack strategy regardless of the underlying model. To solve this optimization problem, we also propose a relaxation technique and approximation theories to reduce the difficulty as well as the computational expense. Experiments demonstrate that, even with no exposure to the model, Macro-F1 drops by 6.4% in node classification and 29.5% in graph classification, a significant result compared with existing works.
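The core idea of quantifying attack strength by the change in a graph filter can be sketched in a toy example. The snippet below is illustrative only and is not the paper's method: it uses the common symmetric normalized adjacency filter (as in GCN-style models) rather than the paper's generic filter, and brute-forces a single edge flip instead of the paper's relaxed optimization.

```python
import numpy as np

def normalized_filter(A):
    """Symmetric normalized adjacency D^{-1/2} (A + I) D^{-1/2},
    a common graph filter in GCN-style models (an illustrative
    stand-in for the paper's generic filter)."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def filter_change(A, edge):
    """Frobenius-norm change of the filter after flipping one edge,
    a simple proxy for attack strength."""
    i, j = edge
    A_pert = A.copy()
    A_pert[i, j] = A_pert[j, i] = 1 - A_pert[i, j]
    return np.linalg.norm(normalized_filter(A_pert) - normalized_filter(A), "fro")

# Toy 4-node graph: greedily pick the edge flip maximizing filter change.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
candidates = [(i, j) for i in range(4) for j in range(i + 1, 4)]
best = max(candidates, key=lambda e: filter_change(A, e))
print("best flip:", best, "change:", filter_change(A, best))
```

Crucially, this objective depends only on the graph itself, never on the victim model's parameters or outputs, which is what makes a strict black-box attack possible.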




How to Cite

Xu, J., Sun, Y., Jiang, X., Wang, Y., Wang, C., Lu, J., & Yang, Y. (2022). Blindfolded Attackers Still Threatening: Strict Black-Box Adversarial Attacks on Graphs. Proceedings of the AAAI Conference on Artificial Intelligence, 36(4), 4299-4307.
