On the Vulnerability of Backdoor Defenses for Federated Learning

Authors

  • Pei Fang, Tongji University
  • Jinghui Chen, Penn State University

DOI:

https://doi.org/10.1609/aaai.v37i10.26393

Keywords:

PEAI: Safety, Robustness & Trustworthiness, ML: Adversarial Learning & Robustness

Abstract

Federated learning (FL) is a popular distributed machine learning paradigm that enables jointly training a global model without sharing clients' data. However, its repetitive server-client communication leaves room for backdoor attacks, which aim to mislead the global model into a targeted misprediction whenever a specific trigger pattern is present. In response to such backdoor threats on federated learning, various defense measures have been proposed. In this paper, we study whether current defense mechanisms truly neutralize backdoor threats to federated learning in a practical setting by proposing a new federated backdoor attack framework and evaluating it against possible countermeasures. Unlike traditional backdoor injection based on training (on triggered data) and rescaling (the malicious client model), the proposed backdoor attack framework (1) directly modifies a small proportion of local model weights to inject the backdoor trigger via sign flips; and (2) jointly optimizes the trigger pattern with the client model, making the attack more persistent and stealthy in circumventing existing defenses. In a case study, we examine the strengths and weaknesses of several recent federated backdoor defenses from three major categories and provide suggestions to practitioners training federated models in practice.
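To make the two components of the attack concrete, the sketch below is a minimal illustration assuming a PyTorch setting. The function names (sign_flip_inject, optimize_trigger), the flip_ratio parameter, and the smallest-magnitude selection heuristic are hypothetical choices for exposition, not the exact procedure from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def sign_flip_inject(model: nn.Module, flip_ratio: float = 0.01) -> None:
    """Illustrative sketch: flip the sign of a small fraction of weights in place."""
    with torch.no_grad():
        for param in model.parameters():
            flat = param.view(-1)
            k = max(1, int(flip_ratio * flat.numel()))
            # Assumption: flip the k smallest-magnitude weights so the
            # perturbation stays small and is harder for magnitude-based
            # defenses to detect.
            _, idx = torch.topk(flat.abs(), k, largest=False)
            flat[idx] *= -1.0


def optimize_trigger(model: nn.Module, x: torch.Tensor, target: int,
                     steps: int = 50, lr: float = 0.1) -> torch.Tensor:
    """Illustrative sketch: optimize a trigger pattern toward a target class."""
    trigger = torch.zeros_like(x[:1], requires_grad=True)
    opt = torch.optim.Adam([trigger], lr=lr)
    labels = torch.full((x.size(0),), target, dtype=torch.long)
    for _ in range(steps):
        opt.zero_grad()
        # Apply the additive trigger and keep inputs in a valid pixel range.
        logits = model(torch.clamp(x + trigger, 0.0, 1.0))
        loss = F.cross_entropy(logits, labels)
        loss.backward()
        opt.step()
    return trigger.detach()
```

In a federated round, a malicious client could run both steps locally before submitting its update; how the actual attack interacts with specific defenses is detailed in the paper.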

Published

2023-06-26

How to Cite

Fang, P., & Chen, J. (2023). On the Vulnerability of Backdoor Defenses for Federated Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(10), 11800-11808. https://doi.org/10.1609/aaai.v37i10.26393

Section

AAAI Technical Track on Philosophy and Ethics of AI