Learning Robust Search Strategies Using a Bandit-Based Approach

Authors

  • Wei Xia, National University of Singapore
  • Roland Yap, National University of Singapore

DOI:

https://doi.org/10.1609/aaai.v32i1.12211

Keywords:

constraint satisfaction, CSP, search heuristic, multi-armed bandit, online learning

Abstract

Effective solving of constraint problems often requires choosing good or specific search heuristics. However, choosing or designing a good search heuristic is non-trivial and is often a manual process. In this paper, rather than manually choosing or designing search heuristics, we propose the use of bandit-based learning techniques to select search heuristics automatically. Our approach is online: the solver learns and selects from a set of heuristics during search. The goal is to obtain automatic search heuristics that give robust performance. Preliminary experiments show that our adaptive technique is more robust than the original search heuristics and can also outperform them.
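
The abstract does not specify which bandit algorithm, candidate heuristics, or reward signal are used. As an illustration only, the sketch below shows a generic UCB1-style selector that picks one of several candidate search heuristics at each restart and updates its estimate from an observed reward; the heuristic names and the placeholder reward are assumptions, not the paper's formulation.

```python
import math
import random


class UCB1HeuristicSelector:
    """UCB1-style multi-armed bandit over a fixed set of search heuristics.

    Hypothetical illustration: the heuristic names and the reward signal
    (e.g., normalized search progress per restart) are assumptions.
    """

    def __init__(self, heuristics):
        self.heuristics = list(heuristics)
        self.counts = [0] * len(self.heuristics)    # times each arm was chosen
        self.values = [0.0] * len(self.heuristics)  # running mean reward per arm
        self.total = 0                              # total number of plays

    def select(self):
        # Play each arm once before applying the UCB1 rule.
        for i, c in enumerate(self.counts):
            if c == 0:
                return i
        # Choose the arm maximizing mean reward plus an exploration bonus.
        ucb = [
            self.values[i] + math.sqrt(2.0 * math.log(self.total) / self.counts[i])
            for i in range(len(self.heuristics))
        ]
        return max(range(len(self.heuristics)), key=lambda i: ucb[i])

    def update(self, arm, reward):
        # Incremental update of the chosen arm's mean reward.
        self.counts[arm] += 1
        self.total += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]


# Usage sketch: at each restart, select a heuristic to drive the search and
# reward it afterwards. The random reward below is a stand-in for a real
# measure of how well that restart performed.
selector = UCB1HeuristicSelector(["dom/wdeg", "activity", "impact"])
for restart in range(20):
    arm = selector.select()
    reward = random.random()  # placeholder for observed search progress
    selector.update(arm, reward)
```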

Published

2018-04-26

How to Cite

Xia, W., & Yap, R. (2018). Learning Robust Search Strategies Using a Bandit-Based Approach. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.12211

Issue

Vol. 32 No. 1 (2018)

Section

Main Track: Search and Constraint Satisfaction