CLPA: Clean-Label Poisoning Availability Attacks Using Generative Adversarial Nets

Authors

  • Bingyin Zhao Clemson University
  • Yingjie Lao Clemson University

DOI:

https://doi.org/10.1609/aaai.v36i8.20902

Keywords:

Machine Learning (ML), Computer Vision (CV)

Abstract

Poisoning attacks are emerging threats to deep neural networks, in which adversaries attempt to compromise models by injecting malicious data points into the clean training data. Poisoning attacks target either the availability or the integrity of a model: an availability attack aims to degrade the overall accuracy, while an integrity attack causes misclassification only for specific instances without affecting accuracy on clean data. Although clean-label integrity attacks have proven effective in recent studies, the feasibility of clean-label availability attacks remains unclear. This paper proposes, for the first time, a clean-label approach, CLPA, for the poisoning availability attack. We reveal that, due to the intrinsic imperfection of classifiers, naturally misclassified inputs can be considered a special type of poisoned data, which we refer to as "natural poisoned data". We then propose a two-phase generative adversarial net (GAN) based poisoned data generation framework, along with a triplet loss function, for synthesizing clean-label poisoned samples that lie in a distribution similar to that of natural poisoned data. The generated poisoned data are plausible to human perception and can also bypass the singular value decomposition (SVD) based defense. We demonstrate the effectiveness of our approach on the CIFAR-10 and ImageNet datasets over a variety of models. Code is available at: https://github.com/bxz9200/CLPA.
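The abstract names the triplet loss only at a high level; the exact formulation is given in the paper. As a rough, hypothetical sketch only, the snippet below shows a triplet-style objective in PyTorch that pulls a generated sample's features toward those of natural poisoned data and away from correctly classified clean data. The function name, the margin value, and the assumption of a fixed feature extractor are illustrative choices, not taken from the paper.

    # Hypothetical sketch (not the paper's exact loss): a triplet objective that
    # encourages GAN-generated poisons to lie near naturally misclassified
    # ("natural poisoned") data in feature space, while staying away from
    # correctly classified clean data.
    import torch
    import torch.nn.functional as F

    def triplet_poison_loss(feat_generated, feat_natural_poison, feat_clean, margin=1.0):
        """All inputs are (batch, dim) embeddings from a fixed feature extractor.

        feat_generated      -- features of GAN-generated candidate poisons
        feat_natural_poison -- features of naturally misclassified training inputs
        feat_clean          -- features of correctly classified clean inputs
        """
        d_pos = F.pairwise_distance(feat_generated, feat_natural_poison)  # pull toward natural poisons
        d_neg = F.pairwise_distance(feat_generated, feat_clean)           # push away from clean data
        return F.relu(d_pos - d_neg + margin).mean()

    # Example usage with random embeddings as stand-ins for real features:
    g, p, c = torch.randn(8, 128), torch.randn(8, 128), torch.randn(8, 128)
    loss = triplet_poison_loss(g, p, c)

In the paper's two-phase framework, a term of this kind would be combined with the standard GAN objective so that the poisons remain visually plausible (clean-label) while occupying the natural-poison region of feature space.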

Published

2022-06-28

How to Cite

Zhao, B., & Lao, Y. (2022). CLPA: Clean-Label Poisoning Availability Attacks Using Generative Adversarial Nets. Proceedings of the AAAI Conference on Artificial Intelligence, 36(8), 9162-9170. https://doi.org/10.1609/aaai.v36i8.20902

Issue

Vol. 36 No. 8 (2022)

Section

AAAI Technical Track on Machine Learning III