Beyond Training-time Poisoning: Component-level and Post-training Backdoors in Deep Reinforcement Learning
DOI: https://doi.org/10.1609/aaai.v40i31.39809

Abstract
Deep Reinforcement Learning (DRL) systems are increasingly used in safety-critical applications, yet their security remains severely underexplored. This work investigates backdoor attacks, which implant hidden triggers that cause malicious actions only when specific inputs appear in the observation space. Existing DRL backdoor research focuses solely on training-time attacks requiring full adversarial access to the training pipeline. In contrast, we reveal critical vulnerabilities across the DRL supply chain where backdoors can be embedded with significantly reduced adversarial privileges. We introduce two novel attacks: (1) TrojanentRL, which exploits component-level flaws to implant a persistent backdoor that survives full model retraining; and (2) InfrectroRL, a post-training backdoor attack that requires no access to training, validation, or test data. Empirical and analytical evaluations across six Atari environments show that our attacks rival state-of-the-art training-time backdoor attacks while operating under much stricter adversarial constraints. We also demonstrate that InfrectroRL evades two leading DRL backdoor defenses. These findings challenge the current research focus and highlight the urgent need for robust defenses.
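To make the threat model concrete, the following is a minimal, hypothetical sketch of a trigger-conditioned policy of the kind the abstract describes: a specific pattern in the observation space flips the agent to an attacker-chosen action. The `TRIGGER` patch, `TARGET_ACTION`, and `backdoored_policy` wrapper are illustrative assumptions for exposition only, not the paper's TrojanentRL or InfrectroRL implementations.

```python
# Illustrative sketch of a backdoored DRL policy (hypothetical; not the
# paper's attack code): a trigger patch in the observation causes the
# policy to emit an attacker-chosen action, and benign behavior otherwise.
import numpy as np

TRIGGER = np.ones((3, 3))   # hypothetical 3x3 patch stamped by the attacker
TARGET_ACTION = 3           # hypothetical attacker-chosen malicious action


def trigger_present(obs: np.ndarray) -> bool:
    """Check whether the trigger patch occupies the top-left corner."""
    return np.array_equal(obs[:3, :3], TRIGGER)


def backdoored_policy(obs: np.ndarray, clean_policy) -> int:
    """Act normally unless the trigger appears in the observation."""
    if trigger_present(obs):
        return TARGET_ACTION        # malicious action on triggered inputs
    return clean_policy(obs)        # benign action on clean inputs


# Demo on an 84x84 Atari-style frame with a stand-in clean policy.
clean = lambda obs: int(obs.mean() > 0.5)
frame = np.zeros((84, 84))
assert backdoored_policy(frame, clean) == clean(frame)    # clean input: benign
frame[:3, :3] = 1.0                                       # stamp the trigger
assert backdoored_policy(frame, clean) == TARGET_ACTION   # trigger fires
```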
Published
2026-03-14
How to Cite
Vyas, S., Caron, A., Hicks, C., Burnap, P., & Mavroudis, V. (2026). Beyond Training-time Poisoning: Component-level and Post-training Backdoors in Deep Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 40(31), 26072–26080. https://doi.org/10.1609/aaai.v40i31.39809
Section
AAAI Technical Track on Machine Learning VIII