Towards Safe AI: Sandboxing DNNs-Based Controllers in Stochastic Games

Authors

  • Bingzhuo Zhong Technical University of Munich
  • Hongpeng Cao Technical University of Munich
  • Majid Zamani University of Colorado Boulder
  • Marco Caccamo Technical University of Munich

DOI:

https://doi.org/10.1609/aaai.v37i12.26789

Keywords:

General

Abstract

Nowadays, AI-based techniques such as deep neural networks (DNNs) are widely deployed in autonomous systems to fulfill complex mission requirements (e.g., motion planning in robotics). However, DNNs-based controllers are typically highly complex, and formally verifying their correctness is difficult, which poses severe risks for safety-critical autonomous systems. In this paper, we propose a construction scheme for a so-called Safe-visor architecture to sandbox DNNs-based controllers. In particular, we perform the construction within a stochastic game framework to provide a system-level safety guarantee that is robust to noise and disturbances. A supervisor is built to check the control inputs provided by the DNNs-based controller and decide whether to accept them. Meanwhile, a safety advisor runs in parallel to provide fallback control inputs whenever the inputs from the DNNs-based controller are rejected. We demonstrate the proposed approach on a quadrotor employing an unverified DNNs-based controller.
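The abstract's supervision loop, run both controllers in parallel, let a supervisor decide which input reaches the plant, can be sketched in a few lines. The following is a minimal toy illustration, not the paper's actual construction: the one-dimensional dynamics, the interval-based acceptance check, and all function names are hypothetical stand-ins, and the paper's supervisor instead enforces a probabilistic safety guarantee over a stochastic game abstraction.

```python
import random

def safety_advisor(state):
    """Hypothetical verified fallback policy: contract the state toward 0."""
    return -0.5 * state

def supervisor_accepts(state, u, limit=1.0):
    """Hypothetical acceptance check: reject inputs whose predicted next
    state leaves the safe interval [-limit, limit] under toy dynamics."""
    predicted_next = state + u  # toy one-dimensional dynamics x' = x + u
    return abs(predicted_next) <= limit

def dnn_controller(state):
    """Stand-in for an unverified DNNs-based controller (here: random)."""
    return random.uniform(-2.0, 2.0)

def safe_visor_step(state):
    """One Safe-visor step: query both controllers, apply the DNN input
    only if the supervisor accepts it, else fall back to the advisor."""
    u_dnn = dnn_controller(state)
    u = u_dnn if supervisor_accepts(state, u_dnn) else safety_advisor(state)
    return state + u  # apply toy dynamics

state = 0.8
for _ in range(20):
    state = safe_visor_step(state)
    assert abs(state) <= 1.0  # safety invariant holds at every step
```

Because every accepted input is pre-checked and the fallback contracts the state, the invariant holds regardless of what the unverified controller proposes; the paper develops the analogous argument for stochastic systems with noise and adversarial disturbances.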

Published

2023-06-26

How to Cite

Zhong, B., Cao, H., Zamani, M., & Caccamo, M. (2023). Towards Safe AI: Sandboxing DNNs-Based Controllers in Stochastic Games. Proceedings of the AAAI Conference on Artificial Intelligence, 37(12), 15340-15349. https://doi.org/10.1609/aaai.v37i12.26789

Section

AAAI Special Track on Safe and Robust AI