Local-Global Defense against Unsupervised Adversarial Attacks on Graphs
DOI:
https://doi.org/10.1609/aaai.v37i7.25979
Keywords:
ML: Graph-based Machine Learning, DMKM: Graph Mining, Social Network Analysis & Community Mining, ML: Adversarial Learning & Robustness
Abstract
Unsupervised pre-training algorithms for graph representation learning are vulnerable to adversarial attacks, such as first-order perturbations on graphs, which can degrade performance on downstream applications. Designing an effective representation learning strategy against white-box attacks remains a crucial open problem. Prior research attempts to improve representation robustness by maximizing mutual information between the representation and the perturbed graph, which is sub-optimal because it does not adapt its defense techniques to the severity of the attack. To address this issue, we propose an unsupervised defense method that combines local and global defenses to improve representation robustness. In particular, we put forward the Perturbed Edges Harmfulness (PEH) metric to quantify the riskiness of an attack, so that when edges are perturbed, the model can automatically assess the risk they pose. Against high-risk attacks, we present an attention-based protection mechanism that penalizes the attention coefficients of perturbed edges in the encoder. Extensive experiments on three benchmark graphs demonstrate that our strategies enhance representation robustness against various adversarial attacks.
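As a rough illustration of the attention-penalty idea described in the abstract (not the authors' actual implementation; the function name, the edge-risk interface, and the penalty form below are assumptions), the sketch lowers the attention logits of edges flagged as likely perturbations before neighborhood normalization:

```python
import torch

def penalized_attention_logits(raw_scores: torch.Tensor,
                               perturb_risk: torch.Tensor,
                               penalty: float = 5.0) -> torch.Tensor:
    """Hypothetical sketch: down-weight attention on suspected perturbed edges.

    raw_scores   : [num_edges] unnormalized attention logits, one per edge
    perturb_risk : [num_edges] scores in [0, 1], e.g. from a PEH-style
                   harmfulness estimate (assumed interface, not the paper's exact metric)
    penalty      : strength of the penalty applied to high-risk edges
    """
    # Subtracting a risk-proportional penalty from the logits means that,
    # after softmax normalization over each node's neighborhood, high-risk
    # edges receive smaller attention coefficients.
    return raw_scores - penalty * perturb_risk
```

In this reading, low-risk edges are left essentially untouched, while edges judged harmful are softly suppressed rather than hard-removed, which keeps the defense adaptive to the estimated severity of the attack.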
Published
2023-06-26
How to Cite
Jin, D., Feng, B., Guo, S., Wang, X., Wei, J., & Wang, Z. (2023). Local-Global Defense against Unsupervised Adversarial Attacks on Graphs. Proceedings of the AAAI Conference on Artificial Intelligence, 37(7), 8105-8113. https://doi.org/10.1609/aaai.v37i7.25979
Section
AAAI Technical Track on Machine Learning II