Foundations for Restraining Bolts: Reinforcement Learning with LTLf/LDLf Restraining Specifications

Authors

  • Giuseppe De Giacomo, Sapienza University of Rome
  • Luca Iocchi, Sapienza University of Rome
  • Marco Favorito, Sapienza University of Rome
  • Fabio Patrizi, Sapienza University of Rome

DOI:

https://doi.org/10.1609/icaps.v29i1.3549

Abstract

In this work we investigate the concept of “restraining bolt”, envisioned in Science Fiction. Specifically, we introduce a novel problem in AI. We have two distinct sets of features extracted from the world: one by the agent and one by the authority imposing restraining specifications (the “restraining bolt”). The two sets are apparently unrelated, since they are of interest to independent parties; however, they both account for (aspects of) the same world. We consider the case in which the agent is a reinforcement learning agent over the first set of features, while the restraining bolt is specified logically using linear temporal logic on finite traces (LTLf) or linear dynamic logic on finite traces (LDLf) over the second set of features. We show formally, and illustrate with examples, that, under general circumstances, the agent can learn while shaping its goals to conform, as much as possible, to the restraining bolt specifications.
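As a rough illustration of the idea described above (a simplified sketch, not the paper's exact construction), the code below runs tabular Q-learning over the agent's own features extended with the state of a DFA tracking a restraining specification over the bolt's features. The hand-coded DFA stands in for one produced by an LTLf/LDLf-to-automaton translator; the environment interface, the fluent name marked_cell, and the reward values are illustrative assumptions.

import random
from collections import defaultdict

# Hand-coded DFA for an illustrative LTLf-style property over one boolean
# fluent, e.g. "eventually visit the marked cell": state 0 = not yet, 1 = done.
DFA_ACCEPTING = {1}

def dfa_step(q, fluents):
    # Advance the illustrative DFA on the fluents observed by the bolt.
    return 1 if (q == 1 or fluents.get("marked_cell", False)) else q

def restrained_q_learning(env, episodes=500, alpha=0.1, gamma=0.99, eps=0.1,
                          bolt_reward=1.0):
    """Tabular Q-learning on the product of agent features and DFA state.

    `env` is assumed to expose reset() -> (agent_state, fluents),
    step(action) -> (agent_state, fluents, reward, done), and `actions`.
    """
    Q = defaultdict(float)
    for _ in range(episodes):
        s, fluents = env.reset()
        q = 0  # initial DFA state
        done = False
        while not done:
            state = (s, q)
            if random.random() < eps:
                a = random.choice(env.actions)
            else:
                a = max(env.actions, key=lambda act: Q[(state, act)])
            s2, fluents, r, done = env.step(a)
            q2 = dfa_step(q, fluents)
            # Extra reward from the restraining bolt when its DFA accepts.
            r_total = r + (bolt_reward if q2 in DFA_ACCEPTING else 0.0)
            best_next = max(Q[((s2, q2), act)] for act in env.actions)
            Q[(state, a)] += alpha * (r_total
                                      + gamma * (0.0 if done else best_next)
                                      - Q[(state, a)])
            s, q = s2, q2
    return Q

Learning over the product state (s, q) lets the agent account for the bolt's reward even though its own features never mention the bolt's fluents.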

Published

2019-07-05

How to Cite

De Giacomo, G., Iocchi, L., Favorito, M., & Patrizi, F. (2019). Foundations for Restraining Bolts: Reinforcement Learning with LTLf/LDLf Restraining Specifications. Proceedings of the International Conference on Automated Planning and Scheduling, 29(1), 128-136. https://doi.org/10.1609/icaps.v29i1.3549