Block-Parallel IDA* for GPUs

Authors

  • Satoru Horie, The University of Tokyo
  • Alex Fukunaga, The University of Tokyo

DOI:

https://doi.org/10.1609/socs.v8i1.18440

Abstract

We investigate GPU-based parallelization of Iterative-Deepening A* (IDA*). We show that straightforward thread-based parallelization techniques that were previously proposed for massively parallel SIMD processors perform poorly due to warp divergence and load imbalance. We propose Block-Parallel IDA* (BPIDA*), which assigns the search of a subtree to a block (a group of threads with access to fast shared memory) rather than to a single thread. On the 15-puzzle, BPIDA* on an NVIDIA GRID K520 with 1536 CUDA cores achieves a speedup of 4.98 over a highly optimized sequential IDA* implementation on a Xeon E5-2670 core.
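The block-per-subtree mapping can be illustrated with a minimal CUDA sketch (not the authors' code): each block owns one subtree root and keeps its depth-first frontier in a shared-memory stack, so the threads of a block cooperate on that subtree instead of each thread searching its own. The names (`State`, `bpida_pass`, `MAX_STACK`), the pop-one-node-per-thread loop, and the lack of overflow spilling are simplifying assumptions for illustration, not the paper's implementation.

```cuda
#include <cuda_runtime.h>
#include <cstdlib>

#define MAX_STACK 2048   // per-block shared stack capacity (assumption)

struct State {
    unsigned char tiles[16];  // 15-puzzle board, 0 = blank
    unsigned char blank;      // index of the blank tile
    unsigned char g;          // cost of the path from the subtree root
};

// Manhattan-distance heuristic (admissible for the 15-puzzle).
__device__ int heuristic(const State &s) {
    int h = 0;
    for (int i = 0; i < 16; ++i) {
        int t = s.tiles[i];
        if (t != 0) h += abs(i / 4 - t / 4) + abs(i % 4 - t % 4);
    }
    return h;
}

// One cost-bounded DFS pass of an IDA* iteration. Block b searches the
// subtree rooted at roots[b]; *found is set when a goal is reached within
// the bound, and *next_bound collects the minimum f-value that exceeded it.
__global__ void bpida_pass(const State *roots, int bound,
                           int *found, int *next_bound) {
    __shared__ State stack[MAX_STACK];
    __shared__ int top;

    if (threadIdx.x == 0) {
        stack[0] = roots[blockIdx.x];
        top = 1;
    }
    __syncthreads();

    const int dr[4] = {-1, 1, 0, 0};
    const int dc[4] = {0, 0, -1, 1};

    while (true) {
        __syncthreads();
        if (top == 0 || atomicAdd(found, 0) != 0) break;

        // Each thread claims at most one node from the shared stack.
        int idx = top - 1 - (int)threadIdx.x;
        State s;
        bool has_node = (idx >= 0 && idx < MAX_STACK);
        if (has_node) s = stack[idx];
        __syncthreads();
        if (threadIdx.x == 0) top = max(0, top - (int)blockDim.x);
        __syncthreads();
        if (!has_node) continue;

        int f = s.g + heuristic(s);
        if (f > bound) { atomicMin(next_bound, f); continue; }
        if (f == s.g) { atomicExch(found, 1); continue; }  // h == 0: goal

        // Expand: push every legal blank move back onto the shared stack.
        int br = s.blank / 4, bc = s.blank % 4;
        for (int m = 0; m < 4; ++m) {
            int nr = br + dr[m], nc = bc + dc[m];
            if (nr < 0 || nr > 3 || nc < 0 || nc > 3) continue;
            State c = s;
            int ni = nr * 4 + nc;
            c.tiles[c.blank] = c.tiles[ni];
            c.tiles[ni] = 0;
            c.blank = (unsigned char)ni;
            c.g = s.g + 1;
            int pos = atomicAdd(&top, 1);
            if (pos < MAX_STACK)
                stack[pos] = c;  // a real implementation would spill overflow
        }
    }
}
```

In this sketch, the host would generate one subtree root per block (for example, by a shallow breadth-first expansion of the start state), launch `bpida_pass` once per IDA* iteration, and raise the bound to `next_bound` whenever no goal is found within the current bound.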

Published

2021-09-01