Sequential Attacks on Kalman Filter-based Forward Collision Warning Systems

Authors

  • Yuzhe Ma University of Wisconsin-Madison
  • Jon A. Sharp University of Wisconsin-Madison
  • Ruizhe Wang University of Wisconsin-Madison
  • Earlence Fernandes University of Wisconsin-Madison
  • Xiaojin Zhu University of Wisconsin-Madison

DOI:

https://doi.org/10.1609/aaai.v35i10.17073

Keywords:

Adversarial Learning & Robustness

Abstract

The Kalman Filter (KF) is widely used in various domains to perform sequential learning or variable estimation. In the context of autonomous vehicles, KF constitutes the core component of many Advanced Driver Assistance Systems (ADAS), such as Forward Collision Warning (FCW). It tracks the states (distance, velocity, etc.) of relevant traffic objects based on sensor measurements. The tracking output of KF is often fed into downstream logic to produce alerts, which are then used by human drivers to make driving decisions in near-collision scenarios. In this paper, we study adversarial attacks on KF as part of the more complex machine-human hybrid system of Forward Collision Warning. Our attack goal is to negatively affect human braking decisions by causing KF to output incorrect state estimations that lead to false or delayed alerts. We accomplish this by sequentially manipulating measurements fed into the KF, and propose a novel Model Predictive Control (MPC) approach to compute the optimal manipulation. Via experiments conducted in a simulated driving environment, we show that the attacker is able to successfully change FCW alert signals through planned manipulation over measurements prior to the desired target time. These results demonstrate that our attack can stealthily mislead a distracted human driver and cause vehicle collisions.
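The attack pipeline described in the abstract (perturb sensor measurements → KF state estimate → FCW alert logic → human decision) can be illustrated with a minimal sketch. Note that the paper plans the perturbation sequence with Model Predictive Control; the snippet below instead injects a simple fixed bias purely to show where in the pipeline the manipulation enters. The state model, noise covariances, attack budget, and time-to-collision threshold are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (not the authors' implementation): a 1-D constant-velocity
# Kalman filter tracking the distance and relative velocity of a lead vehicle,
# with an attacker adding a bounded bias to each distance measurement before
# it reaches the filter. All numerical values are illustrative assumptions.
import numpy as np

dt = 0.1                                   # sensor sampling interval (s), assumed
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition: [distance, relative velocity]
H = np.array([[1.0, 0.0]])                 # sensor measures distance only (assumed)
Q = 0.01 * np.eye(2)                       # process noise covariance (assumed)
R = np.array([[0.25]])                     # measurement noise covariance (assumed)

def kf_step(x, P, z):
    """One Kalman filter predict/update cycle on measurement z."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

def fcw_alert(x, ttc_threshold=2.5):
    """Toy FCW logic: warn when estimated time-to-collision drops below a threshold."""
    distance, velocity = x
    closing_speed = -velocity              # negative relative velocity = closing in
    if closing_speed <= 0:
        return False
    return distance / closing_speed < ttc_threshold

# Simulate a lead vehicle closing at 5 m/s from 60 m away.
rng = np.random.default_rng(0)
x_true = np.array([60.0, -5.0])
x_est, P = np.array([60.0, -5.0]), np.eye(2)
attack_budget = 3.0                        # max per-step distance perturbation (m), assumed

for t in range(130):
    x_true = F @ x_true
    z_clean = H @ x_true + rng.normal(0.0, 0.5, size=1)
    # Attacker inflates the measured distance to delay the warning; the paper
    # instead plans this perturbation sequence jointly over time with MPC.
    z_attacked = z_clean + attack_budget
    x_est, P = kf_step(x_est, P, z_attacked)
    if fcw_alert(x_est):
        print(f"FCW alert at t = {t * dt:.1f} s")
        break
```

Running the sketch with and without the added bias shows the alert firing later under attack, which is the "delayed alert" effect the abstract describes; the paper's MPC formulation chooses the per-step manipulation optimally rather than using a constant bias.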

Published

2021-05-18

How to Cite

Ma, Y., Sharp, J. A., Wang, R., Fernandes, E., & Zhu, X. (2021). Sequential Attacks on Kalman Filter-based Forward Collision Warning Systems. Proceedings of the AAAI Conference on Artificial Intelligence, 35(10), 8865-8873. https://doi.org/10.1609/aaai.v35i10.17073

Issue

Section

AAAI Technical Track on Machine Learning III