Data Poisoning Attacks against Autoregressive Models

Authors

  • Scott Alfeld, University of Wisconsin-Madison
  • Xiaojin Zhu, University of Wisconsin-Madison
  • Paul Barford, University of Wisconsin-Madison

DOI:

https://doi.org/10.1609/aaai.v30i1.10237

Keywords:

Adversarial Learning, Time Series Forecasting, Data Poisoning Attacks

Abstract

Forecasting models play a key role in money-making ventures in many different markets. Such models are often trained on data from various sources, some of which may be untrustworthy. An actor in a given market may be incentivised to drive predictions in a certain direction to their own benefit. Prior analyses of intelligent adversaries in a machine-learning context have focused on regression and classification. In this paper we address the non-iid setting of time series forecasting. We consider a forecaster, Bob, using a fixed, known model and a recursive forecasting method. An adversary, Alice, aims to pull Bob's forecasts toward her desired target series, and may exercise limited influence on the initial values fed into Bob's model. We consider the class of linear autoregressive models, and a flexible framework of encoding Alice's desires and constraints. We describe a method of calculating Alice's optimal attack that is computationally tractable, and empirically demonstrate its effectiveness compared to random and greedy baselines on synthetic and real-world time series data. We conclude by discussing defensive strategies in the face of Alice-like adversaries.
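To make the setting concrete, the sketch below is a minimal illustration (not the paper's exact formulation): a pure linear AR(p) forecaster whose recursive forecasts are a linear function of the p initial values, so an Alice-like attacker perturbing those values can pose her attack as a least-squares problem. The `budget` ridge penalty here is a hypothetical stand-in for the paper's more flexible encoding of Alice's desires and constraints.

```python
# Illustrative sketch only; assumes an intercept-free linear AR(p) model.
import numpy as np

def recursive_forecast(init, coeffs, horizon):
    """Recursively forecast `horizon` steps ahead with a linear AR(p) model.

    init   : the p most recent observations (oldest first)
    coeffs : AR coefficients, coeffs[0] applied to the most recent value
    """
    history = list(init)
    p = len(coeffs)
    preds = []
    for _ in range(horizon):
        nxt = float(np.dot(coeffs, history[-p:][::-1]))
        preds.append(nxt)
        history.append(nxt)
    return np.array(preds)

def forecast_matrix(coeffs, horizon):
    """Matrix M with forecasts = M @ init (forecasts are linear in init)."""
    p = len(coeffs)
    columns = [recursive_forecast(np.eye(p)[i], coeffs, horizon) for i in range(p)]
    return np.column_stack(columns)

def poison_initial_values(init, coeffs, target, budget):
    """Perturb the initial values to pull forecasts toward `target`.

    Solves min_d ||M d - (target - base_forecast)||^2 + budget * ||d||^2,
    a hypothetical ridge-penalized variant of the attacker's objective.
    """
    horizon = len(target)
    M = forecast_matrix(coeffs, horizon)
    base = recursive_forecast(init, coeffs, horizon)
    p = len(init)
    d = np.linalg.solve(M.T @ M + budget * np.eye(p), M.T @ (target - base))
    return np.asarray(init) + d
```

As a usage example, with `coeffs = [0.6, 0.3]`, `init = [1.0, 1.2]`, and a rising `target`, `poison_initial_values` returns slightly shifted initial values whose recursive forecasts track the target more closely than the unpoisoned ones; smaller `budget` values permit larger perturbations.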

Published

2016-02-21

How to Cite

Alfeld, S., Zhu, X., & Barford, P. (2016). Data Poisoning Attacks against Autoregressive Models. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). https://doi.org/10.1609/aaai.v30i1.10237

Section

Technical Papers: Machine Learning Methods