Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks

Authors

  • Jinyuan Jia, Duke University
  • Yupei Liu, Duke University
  • Xiaoyu Cao, Duke University
  • Neil Zhenqiang Gong, Duke University

DOI:

https://doi.org/10.1609/aaai.v36i9.21191

Keywords:

Philosophy and Ethics of AI (PEAI), Machine Learning (ML), Computer Vision (CV)

Abstract

Data poisoning attacks and backdoor attacks aim to corrupt a machine learning classifier by modifying, adding, and/or removing carefully selected training examples, such that the corrupted classifier makes incorrect predictions as the attacker desires. The key idea of state-of-the-art certified defenses against data poisoning attacks and backdoor attacks is to create a majority vote mechanism to predict the label of a testing example, where each voter is a base classifier trained on a subset of the training dataset. Classical simple learning algorithms such as k nearest neighbors (kNN) and radius nearest neighbors (rNN) have intrinsic majority vote mechanisms. In this work, we show that the intrinsic majority vote mechanisms in kNN and rNN already provide certified robustness guarantees against data poisoning attacks and backdoor attacks. Moreover, our evaluation results on MNIST and CIFAR10 show that the intrinsic certified robustness guarantees of kNN and rNN outperform those provided by state-of-the-art certified defenses. Our results serve as standard baselines for future certified defenses against data poisoning attacks and backdoor attacks.
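To make the "intrinsic majority vote" concrete, here is a minimal sketch of kNN prediction as a vote among the k nearest training examples. This is an illustration of the general kNN mechanism only, not the paper's certified-robustness derivation; the toy dataset and function name are invented for the example.

```python
from collections import Counter
import math

def knn_predict(train, test_point, k=3):
    """Predict the label of test_point by majority vote among its k nearest
    training examples (Euclidean distance). Illustrative sketch only."""
    # Sort training examples by distance to the test point and keep the k closest.
    neighbors = sorted(train, key=lambda xy: math.dist(xy[0], test_point))[:k]
    # Majority vote over the neighbors' labels.
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Hypothetical toy dataset: (features, label) pairs.
train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"),
         ((1.0, 1.0), "b"), ((0.9, 1.1), "b")]
print(knn_predict(train, (0.05, 0.1), k=3))  # "a" wins 2 of the 3 votes
```

The intuition behind the certification is that poisoning a bounded number of training examples can change only a bounded number of votes, so a prediction with a large enough vote margin cannot be flipped.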

Published

2022-06-28

How to Cite

Jia, J., Liu, Y., Cao, X., & Gong, N. Z. (2022). Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks. Proceedings of the AAAI Conference on Artificial Intelligence, 36(9), 9575-9583. https://doi.org/10.1609/aaai.v36i9.21191

Section

AAAI Technical Track on Philosophy and Ethics of AI