TY - JOUR
AU - Topper, Noah
AU - Atia, George
AU - Trivedi, Ashutosh
AU - Velasquez, Alvaro
PY - 2022/06/13
Y2 - 2024/03/29
TI - Active Grammatical Inference for Non-Markovian Planning
JF - Proceedings of the International Conference on Automated Planning and Scheduling
JA - ICAPS
VL - 32
IS - 1
SE - Planning and Learning Track
DO - 10.1609/icaps.v32i1.19853
UR - https://ojs.aaai.org/index.php/ICAPS/article/view/19853
SP - 647-651
AB - Planning in finite stochastic environments is canonically posed as a Markov decision process where the transition and reward structures are explicitly known. Reinforcement learning (RL) lifts the explicitness assumption by working with sampling models instead. Further, with the advent of reward machines, we can relax the Markovian assumption on the reward. Angluin's active grammatical inference algorithm L* has found novel application in explicating reward machines for non-Markovian RL. We propose maintaining the assumption of explicit transition dynamics, but with an implicit non-Markovian reward signal, which must be inferred from experiments. We call this setting non-Markovian planning, as opposed to non-Markovian RL. The proposed approach leverages L* to explicate an automaton structure for the underlying planning objective. We exploit the environment model to learn an automaton faster and integrate it with value iteration to accelerate the planning. We compare against recent non-Markovian RL solutions which leverage grammatical inference, and establish complexity results that illustrate the difference in runtime between grammatical inference in planning and RL settings.
ER -