TY  - JOUR
AU  - Roijers, Diederik
AU  - Walraven, Erwin
AU  - Spaan, Matthijs
PY  - 2018/06/15
Y2  - 2024/03/29
TI  - Bootstrapping LPs in Value Iteration for Multi-Objective and Partially Observable MDPs
JF  - Proceedings of the International Conference on Automated Planning and Scheduling
JA  - ICAPS
VL  - 28
IS  - 1
SE  - Main Track
DO  - 10.1609/icaps.v28i1.13903
UR  - https://ojs.aaai.org/index.php/ICAPS/article/view/13903
SP  - 218-226
AB  - Iteratively solving a set of linear programs (LPs) is a common strategy for solving various decision-making problems in Artificial Intelligence, such as planning in multi-objective or partially observable Markov Decision Processes (MDPs). A prevalent feature is that the solutions to these LPs become increasingly similar as the solving algorithm converges, because the solution computed by the algorithm approaches the fixed point of a Bellman backup operator. In this paper, we propose to speed up the solving process of these LPs by bootstrapping based on similar LPs solved previously. We use these LPs to initialize a subset of relevant LP constraints, before iteratively generating the remaining constraints. The resulting algorithm is the first to consider such information sharing across iterations. We evaluate our approach on planning in Multi-Objective MDPs (MOMDPs) and Partially Observable MDPs (POMDPs), showing that it solves fewer LPs than the state of the art, which leads to a significant speed-up. Moreover, for MOMDPs we show that our method scales better in both the number of states and the number of objectives, which is vital for multi-objective planning.
ER  - 