How to Reduce Action Space for Planning Domains? (Student Abstract)
Keywords: Deterministic Planning, Planning With Markov Models, Reinforcement Learning
Abstract
While AI planning and Reinforcement Learning (RL) both solve sequential decision-making problems, they are based on different formalisms, which leads to a significant difference in their action spaces. When solving planning problems with RL algorithms, we have observed that a naive translation of the planning action space incurs severe degradation in sample complexity. In practice, these action spaces are often engineered manually in a domain-specific manner. In this abstract, we present a method that reduces the parameters of operators in AI planning domains by introducing the parameter seed set problem and casting it as a classical planning task. Our experiments show that the proposed method significantly reduces the number of actions in RL environments originating from AI planning domains.
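To illustrate why reducing operator parameters matters, the sketch below (not the paper's algorithm; the operator names and arities are hypothetical) counts naively grounded actions: an operator with k parameters over n objects yields on the order of n**k ground actions, so shrinking each operator's parameter set to a small seed set cuts the action space exponentially.

```python
# Illustrative sketch only: why parameter reduction shrinks the grounded
# action space of a planning domain when translated into an RL environment.
# An operator with arity k grounds to num_objects ** k actions (untyped,
# worst-case count).

def grounded_action_count(operators, num_objects):
    """Count naive groundings: one action per assignment of an object
    to each parameter of each operator."""
    return sum(num_objects ** arity for arity in operators.values())

# Hypothetical domain: operator name -> number of parameters.
full_domain = {"move": 3, "load": 3, "unload": 3}
# The same operators after each is reduced to a single seed parameter.
reduced_domain = {"move": 1, "load": 1, "unload": 1}

print(grounded_action_count(full_domain, 10))     # 3 * 10**3 = 3000
print(grounded_action_count(reduced_domain, 10))  # 3 * 10**1 = 30
```

With 10 objects, the full domain grounds to 3000 actions while the reduced one grounds to 30, which is the kind of action-space reduction that improves RL sample complexity.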
How to Cite
Kokel, H., Lee, J., Katz, M., Sohrabi, S., & Srinivas, K. (2022). How to Reduce Action Space for Planning Domains? (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 36(11), 12989-12990. https://doi.org/10.1609/aaai.v36i11.21631
AAAI Student Abstract and Poster Program