How to Reduce Action Space for Planning Domains? (Student Abstract)
DOI:
https://doi.org/10.1609/aaai.v36i11.21631
Keywords:
Deterministic Planning, Planning With Markov Models, Reinforcement Learning
Abstract
While AI planning and Reinforcement Learning (RL) both solve sequential decision-making problems, they are based on different formalisms, which leads to a significant difference in their action spaces. When solving planning problems with RL algorithms, we have observed that a naive translation of the planning action space incurs severe degradation in sample complexity. In practice, those action spaces are often engineered manually in a domain-specific manner. In this abstract, we present a method that reduces the parameters of operators in AI planning domains by introducing a parameter seed set problem and casting it as a classical planning task. Our experiments show that the proposed method significantly reduces the number of actions in RL environments originating from AI planning domains.
Downloads
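To see why reducing operator parameters matters, note that the grounded action space grows exponentially in the number of parameters: an operator with k object-typed parameters over n objects grounds to roughly n^k actions. The sketch below is purely illustrative (the operator name, object counts, and the two-parameter "seed set" are hypothetical examples, not the paper's actual algorithm); it only demonstrates the combinatorial effect that motivates the method.

```python
from itertools import product

def grounded_actions(operators, objects):
    """Enumerate grounded actions: one action per assignment of
    objects to each operator's parameter slots."""
    actions = []
    for name, arity in operators.items():
        for binding in product(objects, repeat=arity):
            actions.append((name, binding))
    return actions

# Hypothetical domain: 10 objects and one 3-parameter operator.
objects = [f"o{i}" for i in range(10)]

full = grounded_actions({"move": 3}, objects)     # 10**3 = 1000 grounded actions
# If a seed set of 2 parameters suffices (the third is derivable
# from the state), the grounded action space shrinks by a factor of 10.
reduced = grounded_actions({"move": 2}, objects)  # 10**2 = 100 grounded actions

print(len(full), len(reduced))
```

For an RL agent, this is the size of the discrete action set it must explore, which is why a smaller parameterization improves sample complexity.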
Published
2022-06-28
How to Cite
Kokel, H., Lee, J., Katz, M., Sohrabi, S., & Srinivas, K. (2022). How to Reduce Action Space for Planning Domains? (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 36(11), 12989-12990. https://doi.org/10.1609/aaai.v36i11.21631
Issue
Section
AAAI Student Abstract and Poster Program