Optimal Search with Neural Networks: Challenges and Approaches

Authors

  • Tianhua Li University of Alberta, Canada
  • Ruimin Chen University of Alberta, Canada
  • Borislav Mavrin University of Alberta, Canada
  • Nathan R. Sturtevant Department of Computing Science, Alberta Machine Intelligence Institute (Amii), University of Alberta, Canada
  • Doron Nadav Ben-Gurion University, Israel
  • Ariel Felner Ben-Gurion University, Israel

DOI:

https://doi.org/10.1609/socs.v15i1.21758

Keywords:

Machine And Deep Learning In Search, Problem Solving Using Search

Abstract

Work in machine learning has grown tremendously in recent years, but has had little to no impact on optimal search approaches. This paper looks at challenges in using deep learning as part of optimal search, including what is feasible using current public frameworks, and what barriers exist for further adoption. The primary contribution of the paper is to show how to learn admissible heuristics through supervised learning from an existing heuristic. Several approaches are described, with the most successful being based on learning a heuristic as a classifier and then adjusting the quantile used with the classifier to ensure heuristic admissibility, which is required for optimal solutions. A secondary contribution is a description of the Batch A* algorithm, which can batch evaluations for more efficient use of the GPU. While ANNs can effectively learn heuristics that produce smaller search trees than alternative compression approaches, there is still a time overhead compared to efficient C++ implementations. This evaluation highlights a challenge for future work.
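The quantile idea from the abstract can be sketched as follows: if a network is trained as a classifier over discrete heuristic values, its output is a distribution per state, and taking a low quantile of that distribution yields a deliberately conservative (and thus more likely admissible) estimate. This is an illustrative sketch under those assumptions, not the authors' implementation; the function name and quantile value are hypothetical.

```python
import numpy as np

def quantile_heuristic(class_probs, q=0.05):
    """Conservative heuristic estimate from a classifier's output.

    class_probs: probabilities over integer heuristic values 0..K-1.
    Returns the smallest heuristic value whose cumulative probability
    reaches q. Lowering q lowers the estimate, trading heuristic
    strength for a better chance of admissibility.
    """
    cdf = np.cumsum(class_probs)           # cumulative distribution
    return int(np.searchsorted(cdf, q))    # first index where cdf >= q
```

For example, with predicted probabilities [0.1, 0.2, 0.4, 0.3] over heuristic values 0..3, the median (q=0.5) gives 2, while a cautious q=0.05 gives 0.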

Published

2022-07-17