How the Duration of the Learning Period Affects the Performance of Random Gradient Selection Hyper-Heuristics


  • Andrei Lissovoi University of Sheffield
  • Pietro Oliveto University of Sheffield
  • John Alasdair Warwicker Karlsruhe Institute of Technology



Recent analyses have shown that a random gradient hyper-heuristic (HH) using randomised local search (RLS_k) low-level heuristics with different neighbourhood sizes k can optimise the unimodal benchmark function LeadingOnes in the best expected time achievable with the available heuristics, if sufficiently long learning periods τ are employed. In this paper, we examine the impact of the learning period on the performance of the hyper-heuristic for standard unimodal benchmark functions with different characteristics: Ridge, where the HH has to learn that RLS_1 is always the best low-level heuristic, and OneMax, where different low-level heuristics are preferable in different areas of the search space. We rigorously prove that super-linear learning periods τ are required for the HH to achieve optimal expected runtime for Ridge. Conversely, a sub-logarithmic learning period is the best static choice for OneMax, while using super-linear values for τ increases the expected runtime above the asymptotic unary unbiased black box complexity of the problem. We prove that a random gradient HH which automatically adapts the learning period throughout the run has optimal asymptotic expected runtime for both OneMax and Ridge. Additionally, we show experimentally that it outperforms any static learning period for realistic problem sizes.
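To make the selection mechanism described in the abstract concrete, the following is a minimal sketch (not the authors' implementation) of a random gradient hyper-heuristic with a static learning period τ on OneMax. It assumes the standard setup: RLS_k flips k distinct bits, the chosen heuristic is retained while it keeps producing improvements within τ iterations, and a new heuristic is drawn uniformly at random once the period expires. All function names and parameter values here are illustrative.

```python
import random

def one_max(x):
    """OneMax fitness: the number of 1-bits in the bitstring."""
    return sum(x)

def rls_k(x, k):
    """RLS_k mutation: flip k distinct, uniformly chosen bit positions."""
    y = list(x)
    for i in random.sample(range(len(x)), k):
        y[i] = 1 - y[i]
    return y

def random_gradient_hh(n, ks=(1, 2), tau=50, max_iters=100_000):
    """Random gradient HH with a static learning period tau (sketch).

    A low-level heuristic chosen uniformly at random is retained for up
    to tau iterations; any strict fitness improvement within the period
    resets the counter, otherwise a new heuristic is drawn at random.
    Returns the final bitstring and the number of iterations used.
    """
    x = [random.randint(0, 1) for _ in range(n)]
    k = random.choice(ks)           # initial heuristic, chosen at random
    since_improvement = 0
    for t in range(max_iters):
        if one_max(x) == n:         # global optimum reached
            return x, t
        y = rls_k(x, k)
        if one_max(y) > one_max(x):
            x = y
            since_improvement = 0   # success: keep the current heuristic
        else:
            since_improvement += 1
            if since_improvement >= tau:   # period expired: re-draw
                k = random.choice(ks)
                since_improvement = 0
    return x, max_iters
```

An adaptive variant, as analysed in the paper, would additionally adjust τ during the run rather than keeping it fixed; the static version above suffices to illustrate the role the learning period plays.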




How to Cite

Lissovoi, A., Oliveto, P., & Warwicker, J. A. (2020). How the Duration of the Learning Period Affects the Performance of Random Gradient Selection Hyper-Heuristics. Proceedings of the AAAI Conference on Artificial Intelligence, 34(03), 2376-2383.



AAAI Technical Track: Heuristic Search and Optimization