How to Reduce Action Space for Planning Domains? (Student Abstract)

Authors

  • Harsha Kokel The University of Texas at Dallas
  • Junkyu Lee IBM Research
  • Michael Katz IBM Research
  • Shirin Sohrabi IBM Research
  • Kavitha Srinivas IBM Research

DOI:

https://doi.org/10.1609/aaai.v36i11.21631

Keywords:

Deterministic Planning, Planning With Markov Models, Reinforcement Learning

Abstract

While AI planning and Reinforcement Learning (RL) both solve sequential decision-making problems, they are based on different formalisms, which leads to a significant difference in their action spaces. When solving planning problems with RL algorithms, we have observed that a naive translation of the planning action space incurs a severe degradation in sample complexity. In practice, these action spaces are often engineered manually in a domain-specific manner. In this abstract, we present a method that reduces the parameters of operators in AI planning domains by introducing a parameter seed set problem and casting it as a classical planning task. Our experiments show that the proposed method significantly reduces the number of actions in RL environments originating from AI planning domains.
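To see why reducing operator parameters matters, note that naively grounding a lifted operator with k parameters over n objects yields on the order of n^k concrete actions, so eliminating even one parameter shrinks the RL action space by roughly a factor of n. The sketch below is purely illustrative (the object count and parameter numbers are hypothetical, and it does not implement the authors' parameter seed set method):

```python
from itertools import product

def grounded_actions(objects, num_params):
    """Naive grounding: enumerate every parameter binding for one operator."""
    return list(product(objects, repeat=num_params))

# Hypothetical domain with 10 objects.
objects = [f"obj{i}" for i in range(10)]

full = grounded_actions(objects, 3)     # operator with 3 parameters: 10^3 bindings
reduced = grounded_actions(objects, 1)  # same operator keeping a 1-parameter seed

print(len(full), len(reduced))  # 1000 10
```

The exponential gap between the two counts is what makes manual (or, here, automated) action-space engineering necessary before handing a planning domain to an RL algorithm.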

Published

2022-06-28

How to Cite

Kokel, H., Lee, J., Katz, M., Sohrabi, S., & Srinivas, K. (2022). How to Reduce Action Space for Planning Domains? (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 36(11), 12989-12990. https://doi.org/10.1609/aaai.v36i11.21631