BadRL: Sparse Targeted Backdoor Attack against Reinforcement Learning

Authors

  • Jing Cui, University of Chinese Academy of Sciences
  • Yufei Han, INRIA
  • Yuzhe Ma, Microsoft Azure AI
  • Jianbin Jiao, University of Chinese Academy of Sciences
  • Junge Zhang, Institute of Automation, Chinese Academy of Sciences

DOI:

https://doi.org/10.1609/aaai.v38i10.29052

Keywords:

ML: Reinforcement Learning, ML: Adversarial Learning & Robustness

Abstract

Backdoor attacks in reinforcement learning (RL) have previously relied on intense attack strategies to ensure attack success. However, these methods incur high attack costs and are easier to detect. In this work, we propose BadRL, a novel approach that performs highly sparse backdoor poisoning during both training and testing while still mounting successful attacks. BadRL strategically selects state observations with high attack values for trigger injection during training and testing, thereby reducing the chances of detection. In contrast to previous methods that use sample-agnostic trigger patterns, BadRL dynamically generates a distinct trigger pattern for each targeted state observation, enhancing its effectiveness. Theoretical analysis shows that the targeted backdoor attack is always viable and remains stealthy under specific assumptions. Empirical results on various classic RL tasks illustrate that BadRL can substantially degrade the performance of a victim agent with minimal poisoning effort (0.003% of total training steps) during training and infrequent attacks during testing. Code is available at: https://github.com/7777777cc/code.
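To make the sparse, targeted poisoning idea concrete, the Python sketch below illustrates one plausible reading of the abstract: score each state for attack value, inject a trigger only for high-value states within a tiny poisoning budget, and derive the trigger pattern from the state itself rather than using a fixed patch. All names, the attack-value heuristic, and the thresholds here are illustrative assumptions, not the authors' published algorithm.

    # Minimal sketch of sparse, targeted trigger injection (illustrative only;
    # the scoring heuristic and trigger generator are assumptions, not BadRL's
    # published procedure).
    import numpy as np

    def attack_value(q_values, target_action):
        # Assumption: a state is attractive to poison when the attacker's
        # target action currently scores far below the agent's preferred one.
        others = np.delete(q_values, target_action)
        return float(np.max(others) - q_values[target_action])

    def make_trigger(state, scale=0.05):
        # State-dependent trigger: a deterministic perturbation seeded by the
        # state's bytes, so each targeted observation gets a distinct pattern.
        seed = abs(hash(state.tobytes())) % (2**32)
        rng = np.random.default_rng(seed)
        return scale * rng.standard_normal(state.shape)

    def maybe_poison(state, q_values, target_action, budget_left, threshold=1.0):
        # Poison only high-attack-value states while budget remains, keeping
        # the overall poisoning rate extremely sparse.
        if budget_left > 0 and attack_value(q_values, target_action) >= threshold:
            return state + make_trigger(state), True
        return state, False

    # Example: a 4-dimensional observation and Q-values over 3 actions.
    state = np.array([0.1, -0.4, 0.7, 0.2])
    q_values = np.array([1.2, 0.1, 0.9])
    poisoned, used = maybe_poison(state, q_values, target_action=1, budget_left=5)

Deriving the trigger from the state (rather than reusing a fixed, sample-agnostic patch) mirrors the abstract's claim that distinct per-state patterns improve effectiveness, and the budget check reflects the paper's emphasis on poisoning only a minuscule fraction of training steps.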

Published

2024-03-24

How to Cite

Cui, J., Han, Y., Ma, Y., Jiao, J., & Zhang, J. (2024). BadRL: Sparse Targeted Backdoor Attack against Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(10), 11687-11694. https://doi.org/10.1609/aaai.v38i10.29052

Issue

Vol. 38 No. 10 (2024)

Section

AAAI Technical Track on Machine Learning I