Synthesis from Satisficing and Temporal Goals

Authors

  • Suguman Bansal, University of Pennsylvania
  • Lydia Kavraki, Rice University
  • Moshe Y. Vardi, Rice University
  • Andrew Wells, Rice University; Tesla

DOI:

https://doi.org/10.1609/aaai.v36i9.21202

Keywords:

Planning, Routing, and Scheduling (PRS)

Abstract

Reactive synthesis from high-level specifications that combine hard constraints expressed in Linear Temporal Logic (LTL) with soft constraints expressed by discounted sum (DS) rewards has applications in planning and reinforcement learning. An existing approach combines techniques from LTL synthesis with optimization for the DS rewards but has failed to yield a sound algorithm. An alternative approach combining LTL synthesis with satisficing DS rewards (rewards that achieve a threshold) is sound and complete for integer discount factors, but, in practice, a fractional discount factor is desired. This work extends the existing satisficing approach, presenting the first sound algorithm for synthesis from LTL and DS rewards with fractional discount factors. The utility of our algorithm is demonstrated on robotic planning domains.
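As an illustrative sketch only (not the paper's algorithm), the satisficing condition over DS rewards can be phrased as follows: given a reward sequence w_0, w_1, w_2, ... and a discount factor d > 1, the discounted sum is the series with terms w_i / d^i (equivalently, gamma^i * w_i with gamma = 1/d), and satisficing asks whether this sum meets a given threshold. The snippet below evaluates a truncated discounted sum over a finite reward prefix and compares it to a threshold; the reward values, the fractional discount factor d = 3/2, and the threshold are made-up examples, and deciding satisficing over infinite sequences is the nontrivial problem the paper actually addresses.

```python
# Illustrative sketch only: evaluates a truncated discounted sum
# DS(w, d) = sum_i w[i] / d**i over a finite reward prefix and checks it
# against a satisficing threshold. The rewards, the fractional discount
# factor d = 3/2, and the threshold are hypothetical example values.

from fractions import Fraction


def discounted_sum(rewards, d):
    """Discounted sum of a finite reward prefix: sum_i rewards[i] / d**i."""
    return sum(Fraction(r) / (Fraction(d) ** i) for i, r in enumerate(rewards))


def satisfices(rewards, d, threshold):
    """True if the prefix's discounted sum already meets the threshold."""
    return discounted_sum(rewards, d) >= threshold


if __name__ == "__main__":
    rewards = [1, 0, 2, 1]      # hypothetical finite reward prefix
    d = Fraction(3, 2)          # fractional discount factor (d > 1)
    threshold = Fraction(2)     # hypothetical satisficing threshold
    print(discounted_sum(rewards, d), satisfices(rewards, d, threshold))
```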

Published

2022-06-28

How to Cite

Bansal, S., Kavraki, L., Vardi, M. Y., & Wells, A. (2022). Synthesis from Satisficing and Temporal Goals. Proceedings of the AAAI Conference on Artificial Intelligence, 36(9), 9679-9686. https://doi.org/10.1609/aaai.v36i9.21202

Section

AAAI Technical Track on Planning, Routing, and Scheduling