Progressive Poisoned Data Isolation for Training-Time Backdoor Defense
DOI:
https://doi.org/10.1609/aaai.v38i10.29023
Keywords:
ML: Privacy
Abstract
Deep Neural Networks (DNNs) are susceptible to backdoor attacks, in which malicious attackers manipulate the model's predictions via data poisoning. It is hence imperative to develop a strategy for training a clean model from a potentially poisoned dataset. Previous training-time defense mechanisms typically employ a one-time isolation process, often leading to suboptimal isolation outcomes. In this study, we present a novel and effective defense method, termed Progressive Isolation of Poisoned Data (PIPD), that progressively isolates poisoned data to enhance isolation accuracy and mitigate the risk of benign samples being misclassified as poisoned. Once the poisoned portion of the dataset has been identified, we introduce a selective training process to train a clean model. Through these techniques, we ensure that the trained model exhibits a significantly diminished attack success rate on the poisoned data. Extensive experiments on multiple benchmark datasets and DNN models, assessed against nine state-of-the-art backdoor attacks, demonstrate the superior performance of our PIPD method for backdoor defense. For instance, PIPD achieves an average True Positive Rate (TPR) of 99.95% and an average False Positive Rate (FPR) of 0.06% across diverse attacks on the CIFAR-10 dataset, markedly surpassing state-of-the-art methods. The code is available at https://github.com/RorschachChen/PIPD.git.
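The abstract describes a two-stage pipeline: progressively grow a set of suspected-poisoned samples over multiple rounds (rather than isolating once), then train only on the remaining subset. The paper's actual isolation criterion is not given here; the sketch below is a hypothetical illustration in pure Python, assuming a per-sample suspicion score (e.g., derived from training loss) is available each round, with `quota_per_round` as an illustrative parameter.

```python
def progressive_isolation(scores_per_round, quota_per_round):
    """Progressively grow the isolated (suspected-poisoned) index set.

    scores_per_round: list of per-round score lists, where a higher score
    marks a more suspicious sample (a placeholder for whatever statistic
    the defense recomputes each round).
    quota_per_round: how many new samples to isolate per round (assumed).
    """
    isolated = set()
    for scores in scores_per_round:
        # Rank all samples by suspicion, most suspicious first.
        order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
        added = 0
        for idx in order:
            if idx not in isolated:  # only isolate new samples this round
                isolated.add(idx)
                added += 1
                if added == quota_per_round:
                    break
    return sorted(isolated)


def selective_train_indices(n_samples, isolated):
    """Indices kept for clean training: the complement of the isolated set."""
    banned = set(isolated)
    return [i for i in range(n_samples) if i not in banned]
```

Re-scoring between rounds is the point of the progressive scheme: samples that looked benign under an early, partially trained model can surface as suspicious once the model (and the isolated set) improves, which is how a one-shot isolation's misses get corrected.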
Published
2024-03-24
How to Cite
Chen, Y., Wu, H., & Zhou, J. (2024). Progressive Poisoned Data Isolation for Training-Time Backdoor Defense. Proceedings of the AAAI Conference on Artificial Intelligence, 38(10), 11425-11433. https://doi.org/10.1609/aaai.v38i10.29023
Section
AAAI Technical Track on Machine Learning I