Robustness Verification of Deep Reinforcement Learning Based Control Systems Using Reward Martingales
DOI: https://doi.org/10.1609/aaai.v38i18.29976
Keywords: PEAI: Safety, Robustness & Trustworthiness
Abstract
Deep Reinforcement Learning (DRL) has gained prominence as an effective approach for control systems. However, its practical deployment is impeded by state perturbations that can severely degrade system performance. Addressing this critical challenge requires robustness verification of system performance, which involves tackling two quantitative questions: (i) how to establish guaranteed bounds for expected cumulative rewards, and (ii) how to determine tail bounds for cumulative rewards. In this work, we present the first approach for robustness verification of DRL-based control systems by introducing reward martingales, which offer a rigorous mathematical foundation to characterize the impact of state perturbations on system performance in terms of cumulative rewards. Our verified results provide provably quantitative certificates for the two questions. We then show that reward martingales can be implemented and trained via neural networks, for different types of control policies. Experimental results demonstrate that our certified bounds tightly enclose simulation outcomes on various DRL-based control systems, indicating the effectiveness and generality of the proposed approach.
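To illustrate the kind of certificate the abstract describes, the following is a minimal sketch (not the paper's method or benchmarks) of the standard supermartingale argument behind upper-bounding expected cumulative reward: if a candidate function V satisfies V(s) ≥ r(s) + E[V(s′)] at every non-terminal state s, then V(s₀) upper-bounds the expected cumulative reward from s₀. The toy chain MDP, the candidate V, and all numeric constants here are hypothetical, chosen so the condition can be checked exhaustively and compared against simulation.

```python
import random

random.seed(0)

# Hypothetical chain MDP: states 0..N; from a non-terminal state s the
# (perturbed) system advances to s+1 with prob 0.9 and stays with prob 0.1;
# each non-terminal step yields reward 1.0. State N is terminal.
N = 10

def step(s):
    return s + 1 if random.random() < 0.9 else s

def reward(s):
    return 1.0 if s < N else 0.0

# Candidate reward supermartingale: each level s is visited 1/0.9 times in
# expectation (geometric), so (N - s) / 0.9 is the exact expected remaining
# reward and satisfies the martingale condition with equality.
def V(s):
    return (N - s) / 0.9

def check_condition():
    # Verify V(s) >= r(s) + E[V(s')] at every non-terminal state
    # (a small slack absorbs floating-point error).
    for s in range(N):
        exp_next = 0.9 * V(s + 1) + 0.1 * V(s)
        if V(s) + 1e-9 < reward(s) + exp_next:
            return False
    return True

def simulate(s0=0, episodes=2000):
    # Monte Carlo estimate of the expected cumulative reward from s0.
    total = 0.0
    for _ in range(episodes):
        s, acc = s0, 0.0
        while s < N:
            acc += reward(s)
            s = step(s)
        total += acc
    return total / episodes

print(check_condition())   # True: the supermartingale condition holds
print(V(0), simulate())    # certified bound vs. simulated average
```

Because the condition holds at every state, V(0) ≈ 11.11 is a certified upper bound on the expected cumulative reward, and the simulation average lands tightly against it; the paper's contribution is to learn and verify such certificates with neural networks for DRL policies under state perturbations, rather than writing them by hand as done here.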
Published
2024-03-24
How to Cite
Zhi, D., Wang, P., Chen, C., & Zhang, M. (2024). Robustness Verification of Deep Reinforcement Learning Based Control Systems Using Reward Martingales. Proceedings of the AAAI Conference on Artificial Intelligence, 38(18), 19992-20000. https://doi.org/10.1609/aaai.v38i18.29976
Section
AAAI Technical Track on Philosophy and Ethics of AI