STL-Based Synthesis of Feedback Controllers Using Reinforcement Learning

Authors

  • Nikhil Kumar Singh, IIT Kanpur
  • Indranil Saha, IIT Kanpur

DOI:

https://doi.org/10.1609/aaai.v37i12.26764

Keywords:

General

Abstract

Deep Reinforcement Learning (DRL) has the potential to be used for synthesizing feedback controllers (agents) for various complex systems with unknown dynamics. These systems are expected to satisfy diverse safety and liveness properties best captured using temporal logic. In RL, the reward function plays a crucial role in specifying the desired behaviour of these agents. However, the problem of designing the reward function for an RL agent to satisfy complex temporal logic specifications has received limited attention in the literature. To address this, we provide a systematic way of generating rewards in real-time by using the quantitative semantics of Signal Temporal Logic (STL), a widely used temporal logic to specify the behaviour of cyber-physical systems. We propose a new quantitative semantics for STL having several desirable properties, making it suitable for reward generation. We evaluate our STL-based reinforcement learning mechanism on several complex continuous control benchmarks and compare our STL semantics with those available in the literature in terms of their efficacy in synthesizing the controller agent. Experimental results establish our new semantics to be the most suitable for synthesizing feedback controllers for complex continuous dynamical systems through reinforcement learning.
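The abstract describes deriving rewards in real time from the quantitative (robustness) semantics of STL. The paper's new semantics is not reproduced on this page; the sketch below only illustrates the general idea using the classical max/min ("space") robustness of two simple requirements over a 1-D state. The environment interface (reset/step), the toy Point1D system, the greedy policy, the horizon, and the formulas G(|x| <= 1) and F(|x| <= 0.1) are all illustrative assumptions, not the authors' benchmarks or semantics.

def rho_always_safe(trace, limit=1.0):
    """Robustness of G(|x| <= limit): worst-case safety margin over the trace."""
    return min(limit - abs(x) for x in trace)


def rho_eventually_goal(trace, goal=0.0, tol=0.1):
    """Robustness of F(|x - goal| <= tol): best-case goal margin over the trace."""
    return max(tol - abs(x - goal) for x in trace)


def stl_reward(trace):
    """Conjunction of the two requirements: take the minimum robustness."""
    return min(rho_always_safe(trace), rho_eventually_goal(trace))


class Point1D:
    """Toy 1-D environment (an assumption for this sketch): the state moves by the chosen action."""
    def __init__(self, start=0.8):
        self.start = start
        self.x = start

    def reset(self):
        self.x = self.start
        return self.x

    def step(self, action):
        self.x += action
        done = abs(self.x) > 2.0          # terminate if the state leaves the region of interest
        return self.x, done


def greedy_policy(x):
    """Hypothetical policy standing in for the learned agent: step toward the origin."""
    return -0.05 if x > 0 else 0.05


def run_episode(env, policy, horizon=200):
    """Roll out one episode, handing the agent an STL-robustness reward at every step.

    The reward at step t is the robustness of the prefix trace observed so far,
    so the signal is available online rather than only at the end of the episode.
    """
    state = env.reset()
    trace = [state]
    total = 0.0
    for _ in range(horizon):
        action = policy(state)
        state, done = env.step(action)
        trace.append(state)
        total += stl_reward(trace)        # reward derived from STL quantitative semantics
        if done:
            break
    return total


if __name__ == "__main__":
    print(f"accumulated STL-robustness reward: {run_episode(Point1D(), greedy_policy):.3f}")

In practice the classical max/min robustness used above is known to give sparse, non-smooth gradients for learning, which is the kind of limitation that motivates proposing an alternative quantitative semantics for reward generation.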

Published

2023-06-26

How to Cite

Singh, N. K., & Saha, I. (2023). STL-Based Synthesis of Feedback Controllers Using Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(12), 15118-15126. https://doi.org/10.1609/aaai.v37i12.26764

Section

AAAI Special Track on Safe and Robust AI