Attacking Data Transforming Learners at Training Time

Authors

  • Scott Alfeld Amherst College
  • Ara Vartanian University of Wisconsin–Madison
  • Lucas Newman-Johnson Amherst College
  • Benjamin I.P. Rubinstein University of Melbourne

DOI:

https://doi.org/10.1609/aaai.v33i01.33013167

Abstract

While machine learning systems are known to be vulnerable to data-manipulation attacks at both training and deployment time, little is known about how to adapt attacks when the defender transforms data prior to model estimation. We consider the setting where the defender Bob first transforms the data and then learns a model from the result; Alice, the attacker, perturbs Bob’s input data before he transforms it. We develop a general-purpose “plug and play” framework for gradient-based attacks based on matrix differentials, focusing on ordinary least-squares linear regression. This allows learning algorithms and data transformations to be paired and composed arbitrarily: attacks can be adapted to compositional learning maps through the chain rule, analogous to backpropagation on neural network parameters. Best-response attacks can be computed through matrix multiplications from a library of attack matrices for transformations and learners. Our treatment of linear regression extends state-of-the-art attacks at training time by permitting the attacker to affect both features and targets optimally and simultaneously. We explore several transformations used broadly across machine learning, with autoregressive modeling as the driving motivation for our work. There, Bob transforms a univariate time series into a matrix of observations and a vector of target values, which can then be fed into standard learners. Under this learning reduction, a perturbation by Alice to a single value of the time series affects the features of several data points along with target values.
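
To make the autoregressive reduction described above concrete, the following minimal sketch (Python with NumPy; the series values, the window length d, and the function name ar_reduction are illustrative assumptions, not taken from the paper) builds the observation matrix and target vector from a univariate series and shows how a single poisoned value propagates to several feature rows and a target entry.

    import numpy as np

    def ar_reduction(series, d):
        """Order-d autoregressive reduction: row i of X holds d consecutive
        values of the series and y[i] is the value that follows them."""
        T = len(series)
        X = np.stack([series[i:i + d] for i in range(T - d)])
        y = series[d:]
        return X, y

    # Illustrative series and window length (assumed for this sketch).
    s = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    d = 2
    X, y = ar_reduction(s, d)

    # Alice perturbs a single raw value; after Bob's transform the change
    # appears in several feature rows and in a target entry at once.
    s_poisoned = s.copy()
    s_poisoned[3] += 0.5
    Xp, yp = ar_reduction(s_poisoned, d)
    print(np.argwhere(Xp != X))  # feature entries touched: rows 2 and 3
    print(np.argwhere(yp != y))  # target entry touched: index 1

This propagation is what a gradient-based attack must account for: as the abstract notes, the chain rule composes the effect of a perturbation through the transformation with its effect on the learner.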

Published

2019-07-17

How to Cite

Alfeld, S., Vartanian, A., Newman-Johnson, L., & Rubinstein, B. I. P. (2019). Attacking Data Transforming Learners at Training Time. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 3167–3174. https://doi.org/10.1609/aaai.v33i01.33013167

Section

AAAI Technical Track: Machine Learning