Stochastic Planning with Lifted Symbolic Trajectory Optimization


  • Hao Cui Tufts University
  • Thomas Keller University of Basel
  • Roni Khardon Indiana University, Bloomington


This paper investigates online stochastic planning for problems with large factored state and action spaces. One promising approach in recent work estimates the quality of applicable actions in the current state through aggregate simulation from the states they reach. This leads to a significant speedup compared to search over concrete states and actions, and suffices to guide decision making in cases where the performance of a random policy is informative of the quality of a state. The paper makes two significant improvements to this approach. The first, taking inspiration from lifted belief propagation, exploits the structure of the problem to derive a more compact computation graph for aggregate simulation. The second replaces the random policy embedded in the computation graph with symbolic variables that are optimized simultaneously with the search for high-quality actions. This expands the scope of the approach to problems that require deep search and where information is lost quickly under random steps. An empirical evaluation shows that these ideas significantly improve performance, yielding state-of-the-art results on hard planning problems.
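The core idea of optimizing trajectory variables through an aggregate-simulation graph can be illustrated with a minimal sketch. The toy domain below (independent state bits that an action can turn on), the function names, and the finite-difference gradients are illustrative assumptions for this example only; the paper's actual system builds a symbolic computation graph from the problem's structure and differentiates it directly, rather than using a hand-coded simulator.

```python
import numpy as np

def aggregate_simulate(x, p_succ=0.8, horizon=3, n=4):
    """Propagate marginal probabilities forward instead of sampling states.

    x: (horizon, n) array of continuous action "marginals" in [0, 1] --
    the relaxation that replaces concrete (and random) action choices
    with symbolic variables to be optimized.
    """
    s = np.zeros(n)          # marginal probability that each state bit is true
    total = 0.0
    for t in range(horizon):
        a = x[t]
        # Soft transition: a bit stays true, or is set when the action succeeds.
        s = s + (1 - s) * a * p_succ
        total += s.sum()     # expected cumulative reward = expected # of true bits
    return total

def optimize(horizon=3, n=4, steps=200, lr=0.1, eps=1e-4):
    """Projected gradient ascent on the action marginals of all steps at once."""
    x = np.full((horizon, n), 0.5)
    for _ in range(steps):
        g = np.zeros_like(x)
        # Finite-difference gradient of the aggregate-simulation value
        # (a stand-in for backpropagation through the computation graph).
        for idx in np.ndindex(*x.shape):
            xp = x.copy(); xp[idx] += eps
            xm = x.copy(); xm[idx] -= eps
            g[idx] = (aggregate_simulate(xp) - aggregate_simulate(xm)) / (2 * eps)
        x = np.clip(x + lr * g, 0.0, 1.0)   # project back into [0, 1]
    return x

x = optimize()
print(x[0])  # first-step action marginals; rounding yields the action to execute
```

In this unconstrained toy problem every marginal is driven to 1, since executing more actions only adds reward; a realistic encoding would also include the domain's action constraints, and only the first step's optimized action is executed before replanning online.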




How to Cite

Cui, H., Keller, T., & Khardon, R. (2019). Stochastic Planning with Lifted Symbolic Trajectory Optimization. Proceedings of the International Conference on Automated Planning and Scheduling, 29(1), 119-127.