Evaluation of a simple, scalable, parallel best-first search strategy

From MaRDI portal

DOI: 10.1016/j.artint.2012.10.007
zbMATH Open: 1270.68271
arXiv: 1201.3204
OpenAlex: W1965334268
MaRDI QID: Q360118


Authors: Akihiro Kishimoto, Adi Botea, Alex Fukunaga


Publication date: 26 August 2013

Published in: Artificial Intelligence

Abstract: Large-scale, parallel clusters composed of commodity processors are increasingly available, enabling the use of vast processing capabilities and distributed RAM to solve hard search problems. We investigate Hash-Distributed A* (HDA*), a simple approach to parallel best-first search that asynchronously distributes and schedules work among processors based on a hash function of the search state. We use this approach to parallelize the A* algorithm in an optimal sequential version of the Fast Downward planner, as well as a 24-puzzle solver. The scaling behavior of HDA* is evaluated experimentally on a shared memory, multicore machine with 8 cores, a cluster of commodity machines using up to 64 cores, and large-scale high-performance clusters, using up to 2400 processors. We show that this approach scales well, allowing the effective utilization of large amounts of distributed memory to optimally solve problems which require terabytes of RAM. We also compare HDA* to Transposition-table Driven Scheduling (TDS), a hash-based parallelization of IDA*, and show that, in planning, HDA* significantly outperforms TDS. A simple hybrid which combines HDA* and TDS to exploit strengths of both algorithms is proposed and evaluated.
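The core idea described in the abstract is that each search state is assigned to exactly one processor by a hash function, so duplicate detection and the open list stay purely local and generated successors are simply routed to their owners. The following is a minimal single-process sketch of that distribution scheme, not the paper's implementation: the names (`owner_of`, `Processor`, `hda_star`) are illustrative, round-robin polling stands in for asynchronous parallel expansion, and real HDA* exchanges states via non-blocking message passing (e.g. MPI) rather than shared in-memory queues.

```python
import heapq

NUM_PROCS = 4  # number of simulated "processors" (illustrative)

def owner_of(state):
    """Map a state to the single processor that owns it."""
    return hash(state) % NUM_PROCS

class Processor:
    def __init__(self):
        self.open = []  # local priority queue ordered by f = g + h
        self.g = {}     # best known cost-to-reach for owned states

    def receive(self, state, g, h):
        # Duplicate detection is purely local: only the owner ever sees `state`.
        if state not in self.g or g < self.g[state]:
            self.g[state] = g
            heapq.heappush(self.open, (g + h, g, state))

def hda_star(start, goal, neighbors, h):
    """Hash-distributed best-first search, simulated sequentially."""
    procs = [Processor() for _ in range(NUM_PROCS)]
    procs[owner_of(start)].receive(start, 0, h(start))
    best = None
    # Round-robin over processors simulates asynchronous expansion.
    while any(p.open for p in procs):
        for p in procs:
            if not p.open:
                continue
            f, g, state = heapq.heappop(p.open)
            if best is not None and f >= best:
                continue  # cannot improve the incumbent solution
            if state == goal:
                best = g if best is None else min(best, g)
                continue
            for nxt, cost in neighbors(state):
                # Route each successor to its owner via the hash function.
                procs[owner_of(nxt)].receive(nxt, g + cost, h(nxt))
    return best
```

For example, on a path graph over the integers 0..10 with unit edge costs and the admissible heuristic `h(s) = abs(goal - s)`, `hda_star(0, 7, ...)` finds the optimal cost 7 while spreading the states across the four local open lists.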


Full work available at URL: https://arxiv.org/abs/1201.3204








Cited In (7)





This page was built for publication: Evaluation of a simple, scalable, parallel best-first search strategy
