Label-Free Backdoor Attacks in Vertical Federated Learning

Authors

  • Wei Shen Wuhan University
  • Wenke Huang Wuhan University
  • Guancheng Wan Wuhan University
  • Mang Ye Wuhan University

DOI:

https://doi.org/10.1609/aaai.v39i19.34246

Abstract

Vertical Federated Learning (VFL) involves multiple clients collaborating to train a global model, with the features of shared samples distributed across clients. While VFL has become a critical privacy-preserving learning paradigm, its security can be significantly compromised by backdoor attacks, in which a malicious client injects a target backdoor by manipulating its local data. Existing attack methods in VFL rely on the assumption that the malicious client can obtain additional knowledge about task labels, which rarely holds in VFL, where labels are kept by the active party. In this work, we investigate a new backdoor attack paradigm in VFL, Label-Free Backdoor Attacks (LFBA), which requires no additional task label information and is thus feasible in VFL settings. Specifically, whereas existing methods assume access to task labels or target-class samples, we demonstrate that the gradients of local embeddings reflect the semantic information of the labels and can be exploited to construct the target poison sample set. Moreover, we uncover that backdoor triggers tend to be ignored and under-fitted because the model prioritizes learning the original features, which hinders optimization of the backdoor task. To address this, we propose selectively switching poison samples to disrupt feature learning, promoting backdoor task learning while maintaining accuracy on clean data. Extensive experiments demonstrate the effectiveness of our method in various settings.
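The key observation above, that per-sample embedding gradients returned by the server leak label semantics, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; it assumes a two-party split with a linear server head and softmax cross-entropy, and all names and sizes are illustrative. Samples sharing a label receive gradients pointing in similar directions, so a label-free client can group them by cosine similarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy VFL setup: the passive (malicious) client holds embeddings H;
# the server holds a small linear head W and the labels y, which the
# client never sees.
n, d, c = 200, 8, 3                       # samples, embed dim, classes
y = rng.integers(0, c, size=n)            # hidden labels
means = rng.normal(0, 2, size=(c, d))     # class-dependent structure
H = means[y] + rng.normal(0, 0.5, size=(n, d))
W = rng.normal(0, 0.1, size=(d, c))       # server-side head

# Server forward/backward with softmax cross-entropy; the only thing
# sent back to the client is dL/dH, the per-sample embedding gradient.
logits = H @ W
logits -= logits.max(axis=1, keepdims=True)
P = np.exp(logits)
P /= P.sum(axis=1, keepdims=True)
G = (P - np.eye(c)[y]) @ W.T              # gradients returned to client

# Label-free inference on the client side: normalize gradients and
# compare directions. Same-label samples cluster together, so the
# client can assemble a poison set around any anchor sample.
Gn = G / np.linalg.norm(G, axis=1, keepdims=True)
anchor = 0                                # an arbitrary sample of interest
sim = Gn @ Gn[anchor]
same = sim[y == y[anchor]].mean()         # high: near-identical directions
diff = sim[y != y[anchor]].mean()         # low: different gradient directions
print(f"same-label cosine {same:.2f}, other-label cosine {diff:.2f}")
```

With an early-training head (near-uniform softmax), each gradient is approximately `(1/c - onehot(y)) @ W.T`, which depends only on the label, so the same-label similarity is close to 1 while cross-label similarity is markedly lower.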

Published

2025-04-11

How to Cite

Shen, W., Huang, W., Wan, G., & Ye, M. (2025). Label-Free Backdoor Attacks in Vertical Federated Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 39(19), 20389–20397. https://doi.org/10.1609/aaai.v39i19.34246

Section

AAAI Technical Track on Machine Learning V