Generic Adversarial Attack Framework Against Graph-based Vertical Federated Learning
DOI:
https://doi.org/10.1609/aaai.v40i42.40878
Abstract
Graph-based vertical federated learning (GVFL) enables multiple parties to collaboratively train and infer over aligned nodes, where each party contributes a local embedding derived from its own attributes and adjacency relations. Adversarial inputs injected by an attacker can skew the joint prediction toward the attacker's desired outcomes while diminishing both the influence and the measured contribution of benign parties. However, most existing attacks rest on strong assumptions, such as access to the server architecture, the ability to query the model, or in-domain auxiliary graphs. In this paper, we propose SGAC, an attack framework that dominates joint inference without relying on these assumptions. SGAC learns label-indicative embeddings and class-transferable probabilities to build a surrogate that closely mimics the server-side classification behavior, exploiting auxiliary graphs drawn from non-training domains. SGAC then uses saliency over node attributes and edges on the auxiliary graphs to construct a diverse set of shadow inputs that resemble highly influential test instances. Combining surrogate fidelity with input diversity, SGAC crafts transferable contribution-monopoly adversarial inputs that hijack GVFL incentives. Extensive experiments across diverse model architectures validate SGAC's effectiveness.
Published
2026-03-14
How to Cite
Liu, Y., Jiang, P., Liu, Q., & Zhu, L. (2026). Generic Adversarial Attack Framework Against Graph-based Vertical Federated Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 40(42), 35662–35670. https://doi.org/10.1609/aaai.v40i42.40878
Issue
Section
AAAI Technical Track on Philosophy and Ethics of AI