On Robustness of Linear Classifiers to Targeted Data Poisoning
DOI: https://doi.org/10.1609/aaai.v40i26.39301
Abstract
Data poisoning is a training-time attack that undermines the trustworthiness of learned models. In a targeted data poisoning attack, an adversary manipulates the training dataset to alter the classification of a targeted test point. Given the typically large size of training datasets, manual detection of poisoning is difficult. An alternative is to automatically measure a dataset's robustness against such an attack, which is the focus of this paper. We consider a threat model in which an adversary can perturb only the labels of the training dataset, with knowledge limited to the hypothesis space of the victim's model. In this setting, we prove that computing the robustness is an NP-complete problem, even when the hypotheses are linear classifiers. To overcome this, we present a technique that finds lower and upper bounds on robustness. Our implementation computes these bounds efficiently in practice for many publicly available datasets. We experimentally demonstrate the effectiveness of our approach: a poisoning that exceeds the identified robustness bounds significantly impacts the classification of the targeted test point. We are also able to compute these bounds in many cases where state-of-the-art techniques fail.
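To make the threat model concrete, the following is a minimal, hypothetical sketch (not the paper's algorithm or data) of a label-only targeted poisoning attack. It uses a nearest-centroid rule, which is a linear classifier in one dimension, and shows that flipping a single training label can change the prediction on a targeted test point, i.e., the dataset's robustness for that target is at most one flip.

```python
# Illustrative sketch only: a tiny 1-D example showing that flipping one
# training label can change a linear classifier's prediction on a targeted
# test point. The nearest-centroid rule is linear in 1-D; all data below
# are made up for illustration.

def nearest_centroid_predict(points, labels, x):
    """Predict +1 or -1 for x by the closer class centroid."""
    pos = [p for p, y in zip(points, labels) if y == +1]
    neg = [p for p, y in zip(points, labels) if y == -1]
    c_pos = sum(pos) / len(pos)
    c_neg = sum(neg) / len(neg)
    return +1 if abs(x - c_pos) <= abs(x - c_neg) else -1

# Clean training set: two well-separated classes.
points = [2.0, 3.0, 4.0, -2.0, -3.0, -4.0]
labels = [+1, +1, +1, -1, -1, -1]
target = 0.5

clean_pred = nearest_centroid_predict(points, labels, target)

# Label-only poisoning: flip a single label (the point at 2.0).
poisoned = labels[:]
poisoned[0] = -1
poisoned_pred = nearest_centroid_predict(points, poisoned, target)

print(clean_pred, poisoned_pred)  # the target's prediction flips from +1 to -1
```

The robustness studied in the paper is the minimum number of such label flips needed to change the target's classification; the sketch above exhibits an upper bound of one for this toy dataset.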
Published
2026-03-14
How to Cite
Gupta, N., Prabhu S, S., Chakraborty, S., & R, V. (2026). On Robustness of Linear Classifiers to Targeted Data Poisoning. Proceedings of the AAAI Conference on Artificial Intelligence, 40(26), 21531–21539. https://doi.org/10.1609/aaai.v40i26.39301
Section
AAAI Technical Track on Machine Learning III