Large-State Reinforcement Learning for Hyper-Heuristics

Authors

  • Lucas Kletzander, TU Wien
  • Nysret Musliu, TU Wien

DOI:

https://doi.org/10.1609/aaai.v37i10.26466

Keywords:

SO: Metareasoning and Metaheuristics, ML: Reinforcement Learning Algorithms, PRS: Applications, SO: Heuristic Search

Abstract

Hyper-heuristics are a domain-independent problem-solving approach whose main task is to select effective chains of problem-specific low-level heuristics on the fly for an unseen instance. This task can be framed as a reinforcement learning problem; however, the information available to the hyper-heuristic is very limited, which usually leads to very restricted state representations. In this work, we use the trajectory of solution changes for the first time to build a larger set of features for reinforcement learning in the novel hyper-heuristic LAST-RL (Large-State Reinforcement Learning). Furthermore, we introduce a probability distribution for the exploration case of our epsilon-greedy policy that is based on the idea of Iterated Local Search, increasing the chance of sampling good chains of low-level heuristics. The benefit of combining these novel components is demonstrated on the academic benchmark of the Cross-Domain Heuristic Search Challenge 2011, which consists of six different problem domains. Our approach provides state-of-the-art results on this benchmark, outperforming recent hyper-heuristics based on reinforcement learning, and also demonstrates strong performance on a benchmark of complex real-life personnel scheduling domains.
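
To make the selection mechanism concrete, below is a minimal Python sketch of an epsilon-greedy policy whose exploration step samples an ILS-style chain (a perturbation heuristic followed by a short run of local-search heuristics) instead of a uniformly random action. The heuristic names, heuristic categories, chain lengths, and Q-value structure are illustrative assumptions for this sketch, not the paper's exact design or distribution.

```python
import random

# Hypothetical heuristic pools; cross-domain hyper-heuristic frameworks
# typically distinguish perturbative heuristics (mutation, ruin-and-recreate)
# from local-search heuristics.
PERTURBATION = ["mutation_0", "ruin_recreate_0"]
LOCAL_SEARCH = ["local_search_0", "local_search_1"]

def ils_inspired_chain(max_ls_steps=3):
    """Sample a chain of low-level heuristics in the spirit of Iterated
    Local Search: one perturbation step followed by a short run of
    local-search steps (an illustrative assumption)."""
    chain = [random.choice(PERTURBATION)]
    chain += random.choices(LOCAL_SEARCH, k=random.randint(1, max_ls_steps))
    return chain

def epsilon_greedy(q_values, state, epsilon=0.1):
    """Pick the next low-level heuristic(s): exploit the learned value
    estimates with probability 1 - epsilon; otherwise explore by sampling
    an ILS-style chain rather than a single uniform random action."""
    if random.random() < epsilon:
        return ils_inspired_chain()          # exploration case
    best = max(q_values[state], key=q_values[state].get)
    return [best]                            # exploitation: greedy action

# Example usage with a toy Q-table keyed by state, then heuristic name.
q = {"s0": {"mutation_0": 0.2, "local_search_0": 0.7}}
print(epsilon_greedy(q, "s0", epsilon=0.3))
```

The design point illustrated here is that biasing exploration toward ILS-like perturbation-then-improvement chains makes a random exploration step more likely to produce a useful sequence than sampling each heuristic independently.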

Published

2023-06-26

How to Cite

Kletzander, L., & Musliu, N. (2023). Large-State Reinforcement Learning for Hyper-Heuristics. Proceedings of the AAAI Conference on Artificial Intelligence, 37(10), 12444-12452. https://doi.org/10.1609/aaai.v37i10.26466

Section

AAAI Technical Track on Search and Optimization