Learning Heuristic Selection with Dynamic Algorithm Configuration
Keywords: Learning Effective Heuristics And Other Forms Of Control Knowledge, Learning To Improve The Effectiveness Of Planning & Scheduling Systems, Applications That Involve A Combination Of Learning With Planning Or Scheduling
Abstract
A key challenge in satisficing planning is to use multiple heuristics within one heuristic search. Aggregating multiple heuristic estimates, for example by taking the maximum, has the disadvantage that bad estimates of a single heuristic can negatively affect the whole search. Since the performance of a heuristic varies from instance to instance, approaches such as algorithm selection can be applied successfully. In addition, alternating between multiple heuristics during the search makes it possible to use all heuristics equally and to improve performance. However, all these approaches ignore the internal search dynamics of a planning system, which can help to select the most useful heuristic for the current expansion step. We show that dynamic algorithm configuration can be used for dynamic heuristic selection that takes the internal search dynamics of a planning system into account. Furthermore, we prove that this approach generalizes over existing approaches and that it can exponentially improve the performance of the heuristic search. To learn dynamic heuristic selection, we propose an approach based on reinforcement learning and show empirically that domain-wise learned policies, which take the internal search dynamics of a planning system into account, can outperform existing approaches.
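The core idea of the abstract can be sketched as a multi-queue greedy best-first search in which, at every expansion step, a policy decides which heuristic's open list to pop from. The sketch below is illustrative only: the function names, the toy feature vector (the best h-value per queue), and the example policy are assumptions standing in for the learned DAC policy, not the authors' implementation.

```python
import heapq
import itertools

def dynamic_gbfs(start, goal_test, successors, heuristics, select):
    """Greedy best-first search with one open list per heuristic.

    At each expansion step, select(features) returns the index of the
    heuristic whose queue is popped next -- a stand-in for a learned
    per-step heuristic-selection policy.
    """
    counter = itertools.count()          # tie-breaker so states are never compared
    queues = [[] for _ in heuristics]
    for i, h in enumerate(heuristics):
        heapq.heappush(queues[i], (h(start), next(counter), start))
    closed = set()
    expansions = 0
    while any(queues):
        # Toy "search dynamics" features: the current best h-value per queue.
        feats = [q[0][0] if q else float("inf") for q in queues]
        i = select(feats)
        if not queues[i]:                # fall back if the chosen queue is empty
            i = min(range(len(queues)), key=lambda j: feats[j])
        _, _, s = heapq.heappop(queues[i])
        if s in closed:
            continue
        closed.add(s)
        expansions += 1
        if goal_test(s):
            return s, expansions
        for t in successors(s):
            if t not in closed:
                for j, h in enumerate(heuristics):
                    heapq.heappush(queues[j], (h(t), next(counter), t))
    return None, expansions

# Toy usage: reach state 10 from 0 with steps +1/+2, using an informative
# and an uninformative heuristic. The policy greedily picks the queue
# whose best entry looks most promising; alternation would instead cycle
# round-robin over the queues.
goal, n = dynamic_gbfs(
    start=0,
    goal_test=lambda s: s == 10,
    successors=lambda s: [s + 1, s + 2],
    heuristics=[lambda s: abs(10 - s), lambda s: 0],
    select=lambda feats: feats.index(min(feats)),
)
```

Replacing `select` with a fixed index recovers single-heuristic search, and cycling over queue indices recovers alternation, which is the sense in which per-step selection generalizes both.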
How to Cite
Speck, D., Biedenkapp, A., Hutter, F., Mattmüller, R., & Lindauer, M. (2021). Learning Heuristic Selection with Dynamic Algorithm Configuration. Proceedings of the International Conference on Automated Planning and Scheduling, 31(1), 597-605. Retrieved from https://ojs.aaai.org/index.php/ICAPS/article/view/16008
Special Track on Planning and Learning