Active Grammatical Inference for Non-Markovian Planning
DOI:
https://doi.org/10.1609/icaps.v32i1.19853
Keywords:
Non-Markovian Planning, Regular Decision Processes, Grammatical Inference, Non-Markovian Reward, Markov Decision Processes
Abstract
Planning in finite stochastic environments is canonically posed as a Markov decision process in which the transition and reward structures are explicitly known. Reinforcement learning (RL) lifts the explicitness assumption by working with sampling models instead. Further, with the advent of reward machines, we can relax the Markovian assumption on the reward. Angluin's active grammatical inference algorithm L* has found novel application in explicating reward machines for non-Markovian RL. We propose maintaining the assumption of explicit transition dynamics, but with an implicit non-Markovian reward signal, which must be inferred from experiments. We call this setting non-Markovian planning, as opposed to non-Markovian RL. The proposed approach leverages L* to explicate an automaton structure for the underlying planning objective. We exploit the environment model to learn the automaton faster and integrate it with value iteration to accelerate planning. We compare against recent non-Markovian RL solutions that leverage grammatical inference, and establish complexity results that illustrate the difference in runtime between grammatical inference in planning and RL settings.
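The core planning step the abstract alludes to, namely running value iteration over the product of an explicit MDP and a learned reward automaton, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the data structures (`P`, `delta`, `R`, `label`) and the synchronous-product construction are assumptions chosen for clarity, with the automaton state `q` carrying the non-Markovian history.

```python
# Sketch: value iteration on the product of an explicit MDP and a reward
# automaton (e.g. one inferred by L*). All names here are illustrative.
#
# MDP:        P[s][a] = list of (next_state, probability) pairs
# Automaton:  delta[q][label] -> next automaton state
#             R[q][label]     -> reward emitted on that automaton transition
# label(s) maps an MDP state to the observation the automaton reads.

GAMMA = 0.9  # discount factor (assumed)

def value_iteration(states, actions, P, Q, delta, R, label, tol=1e-6):
    # Value function over product states (s, q); the pair is Markovian
    # even though the reward depends on history through q.
    V = {(s, q): 0.0 for s in states for q in Q}
    while True:
        diff = 0.0
        for s in states:
            for q in Q:
                best = float("-inf")
                for a in actions:
                    val = 0.0
                    for s2, p in P[s][a]:
                        q2 = delta[q][label(s2)]  # automaton tracks history
                        r = R[q][label(s2)]       # non-Markovian reward via q
                        val += p * (r + GAMMA * V[(s2, q2)])
                    best = max(best, val)
                diff = max(diff, abs(best - V[(s, q)]))
                V[(s, q)] = best
        if diff < tol:
            return V
```

A two-state MDP with a two-state automaton (reward only when label `a` is seen before `b`) is enough to exercise this: the product has four states, and the optimal values satisfy the usual geometric-series fixed point.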
Published
2022-06-13
How to Cite
Topper, N., Atia, G., Trivedi, A., & Velasquez, A. (2022). Active Grammatical Inference for Non-Markovian Planning. Proceedings of the International Conference on Automated Planning and Scheduling, 32(1), 647-651. https://doi.org/10.1609/icaps.v32i1.19853
Section
Planning and Learning Track