Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks

Authors

  • Jinyuan Jia, Duke University
  • Xiaoyu Cao, Duke University
  • Neil Zhenqiang Gong, Duke University

DOI:

https://doi.org/10.1609/aaai.v35i9.16971

Keywords:

Adversarial Learning & Robustness, Safety, Robustness & Trustworthiness, Privacy & Security, Adversarial Attacks & Robustness

Abstract

In a data poisoning attack, an attacker modifies, deletes, and/or inserts training examples to corrupt the learned machine learning model. Bootstrap Aggregating (bagging) is a well-known ensemble learning method that trains multiple base models on random subsamples of a training dataset using a base learning algorithm and predicts the label of a testing example via majority vote among the base models. We prove the intrinsic certified robustness of bagging against data poisoning attacks. Specifically, we show that bagging with an arbitrary base learning algorithm provably predicts the same label for a testing example when the number of modified, deleted, and/or inserted training examples is bounded by a threshold. Moreover, we show that our derived threshold is tight if no assumptions on the base learning algorithm are made. We evaluate our method on MNIST and CIFAR10. For instance, our method achieves a certified accuracy of 91.1% on MNIST when up to 100 training examples are arbitrarily modified, deleted, and/or inserted. Code is available at: https://github.com/jjy1994/BaggingCertifyDataPoisoning.
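To make the bagging procedure described above concrete, the following minimal Python sketch trains base models on random subsamples drawn with replacement and predicts by majority vote. The nearest-centroid base learner, subsample size, and number of base models here are illustrative assumptions only, not the authors' settings; their implementation and certification code are in the linked repository.

```python
# Minimal sketch of bagging with majority-vote prediction (illustrative only;
# not the authors' implementation). Uses a toy nearest-centroid base learner.
import numpy as np

def train_nearest_centroid(X, y):
    # Hypothetical toy base learner: store one centroid per class.
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict_nearest_centroid(model, X):
    # Assign each example to the class with the nearest centroid.
    classes, centroids = model
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[dists.argmin(axis=1)]

def bagging_train(X, y, n_models=100, subsample_size=50, seed=0):
    # Train each base model on a random subsample drawn with replacement.
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), size=subsample_size)
        models.append(train_nearest_centroid(X[idx], y[idx]))
    return models

def bagging_predict(models, X):
    # Majority vote over the base models' predicted labels.
    votes = np.stack([predict_nearest_centroid(m, X) for m in models])
    preds = []
    for col in votes.T:
        labels, counts = np.unique(col, return_counts=True)
        preds.append(labels[counts.argmax()])
    return np.array(preds)

# Toy usage on synthetic two-class data.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 5)), rng.normal(3, 1, (200, 5))])
y = np.array([0] * 200 + [1] * 200)
models = bagging_train(X, y)
print(bagging_predict(models, X[:5]))
```

Intuitively, the certification in the paper rests on the fact that each base model sees only a small random subsample, so a bounded number of poisoned training examples can affect only a limited fraction of the base models, leaving the majority vote unchanged; the formal threshold is derived in the paper itself.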

Published

2021-05-18

How to Cite

Jia, J., Cao, X., & Gong, N. Z. (2021). Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 7961-7969. https://doi.org/10.1609/aaai.v35i9.16971

Issue

Vol. 35 No. 9 (2021)

Section

AAAI Technical Track on Machine Learning II