Goal Recognition as Reinforcement Learning

Authors

  • Leonardo Amado — Pontifical Catholic University of Rio Grande do Sul
  • Reuth Mirsky — Bar Ilan University; The University of Texas at Austin
  • Felipe Meneguzzi — University of Aberdeen; Pontifical Catholic University of Rio Grande do Sul

DOI:

https://doi.org/10.1609/aaai.v36i9.21198

Keywords:

Planning, Routing, And Scheduling (PRS), Multiagent Systems (MAS), Machine Learning (ML)

Abstract

Most approaches for goal recognition rely on specifications of the possible dynamics of the actor in the environment when pursuing a goal. These specifications suffer from two key issues. First, encoding these dynamics requires careful design by a domain expert, which is often not robust to noise at recognition time. Second, existing approaches often need costly real-time computations to reason about the likelihood of each potential goal. In this paper, we develop a framework that combines model-free reinforcement learning and goal recognition to alleviate the need for careful, manual domain design, and the need for costly online executions. This framework consists of two main stages: offline learning of policies or utility functions for each potential goal, and online inference. We provide a first instance of this framework using tabular Q-learning for the learning stage, as well as three measures that can be used to perform the inference stage. The resulting instantiation achieves performance on par with state-of-the-art goal recognizers on standard evaluation domains, and superior performance in noisy environments.
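To make the two-stage framework concrete, the following is a minimal sketch, not the paper's implementation: a tabular Q-function is learned offline for each candidate goal on a toy 1-D corridor, and online inference scores each goal by summing the learned Q-values of the observed state–action pairs (one plausible utility-based measure; the paper's actual three measures may differ). The environment, reward scheme, and `recognize` scoring rule here are illustrative assumptions.

```python
import random

def q_learn(goal, n_states=5, episodes=2000, alpha=0.5, gamma=0.9, eps=0.2):
    """Offline stage: tabular Q-learning on a 1-D corridor.

    Illustrative setup (not from the paper): actions are -1 (left) and
    +1 (right), and the agent receives reward 1 on reaching `goal`.
    """
    Q = {(s, a): 0.0 for s in range(n_states) for a in (-1, +1)}
    for _ in range(episodes):
        s = random.randrange(n_states)
        for _ in range(50):
            if s == goal:
                break  # episode ends at the goal
            # epsilon-greedy action selection over the two actions
            if random.random() < eps:
                a = random.choice((-1, +1))
            else:
                a = max((-1, +1), key=lambda act: Q[(s, act)])
            s2 = min(max(s + a, 0), n_states - 1)  # clip at corridor ends
            r = 1.0 if s2 == goal else 0.0
            # standard Q-learning update
            Q[(s, a)] += alpha * (
                r + gamma * max(Q[(s2, b)] for b in (-1, +1)) - Q[(s, a)]
            )
            s = s2
    return Q

def recognize(observations, q_tables):
    """Online stage: rank goals by the summed Q-value of observed (s, a) pairs."""
    scores = {g: sum(Q[sa] for sa in observations) for g, Q in q_tables.items()}
    return max(scores, key=scores.get)

random.seed(0)
# One Q-table per candidate goal (corridor endpoints 0 and 4).
q_tables = {g: q_learn(g) for g in (0, 4)}
obs = [(1, +1), (2, +1), (3, +1)]      # observed agent moving right
print(recognize(obs, q_tables))        # agent heading right -> goal 4
```

Because all reasoning about goal likelihood reduces to table lookups over the precomputed Q-functions, the online step is cheap, which is the point the abstract makes about avoiding costly real-time computation.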

Published

2022-06-28

How to Cite

Amado, L., Mirsky, R., & Meneguzzi, F. (2022). Goal Recognition as Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 36(9), 9644-9651. https://doi.org/10.1609/aaai.v36i9.21198

Section

AAAI Technical Track on Planning, Routing, and Scheduling