Deep policy dynamic programming for vehicle routing problems

DOI: 10.1007/978-3-031-08011-1_14
zbMATH Open: 1504.90175
arXiv: 2102.11756
OpenAlex: W3132134635
MaRDI QID: Q2170197
FDO: Q2170197


Authors: Wouter Kool, Herke van Hoof, Max Welling, Joaquim A. S. Gromicho


Publication date: 30 August 2022

Abstract: Routing problems are a class of combinatorial problems with many practical applications. Recently, end-to-end deep learning methods have been proposed to learn approximate solution heuristics for such problems. In contrast, classical dynamic programming (DP) algorithms guarantee optimal solutions, but scale badly with the problem size. We propose Deep Policy Dynamic Programming (DPDP), which aims to combine the strengths of learned neural heuristics with those of DP algorithms. DPDP prioritizes and restricts the DP state space using a policy derived from a deep neural network, which is trained to predict edges from example solutions. We evaluate our framework on the travelling salesman problem (TSP), the vehicle routing problem (VRP) and the TSP with time windows (TSPTW), and show that the neural policy improves the performance of (restricted) DP algorithms, making them competitive with strong alternatives such as LKH, while also outperforming most other 'neural approaches' for solving TSPs, VRPs and TSPTWs with 100 nodes.


Full work available at URL: https://arxiv.org/abs/2102.11756
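
The abstract's core mechanism is a dynamic program whose state space is pruned by a learned edge-scoring policy. The sketch below is a minimal illustration of that idea for the TSP, not the authors' implementation: it assumes a precomputed, hypothetical matrix heatmap[i][j] of edge scores (standing in for the graph neural network's edge predictions) and ranks DP states purely by accumulated heat, whereas the paper uses a more refined scoring and dominance scheme.

```python
import math

def dpdp_tsp(dist, heatmap, beam_size=128):
    """Restricted TSP dynamic program over (visited-mask, last-node) states,
    keeping only the beam_size states with the highest accumulated edge heat
    at each step -- a simplified version of DPDP's policy-guided pruning."""
    n = len(dist)
    start = 0
    # Each DP state (mask, last) stores (tour cost so far, accumulated heat).
    beam = {(1 << start, start): (0.0, 0.0)}
    for _ in range(n - 1):
        candidates = {}
        for (mask, last), (cost, heat) in beam.items():
            for nxt in range(n):
                if mask & (1 << nxt):
                    continue  # node already visited
                key = (mask | (1 << nxt), nxt)
                new = (cost + dist[last][nxt], heat + heatmap[last][nxt])
                # DP dominance: keep only the cheapest path into each state.
                if key not in candidates or new[0] < candidates[key][0]:
                    candidates[key] = new
        # Policy step: instead of keeping the full (exponential) state space,
        # retain only the states the heat map scores highest.
        ranked = sorted(candidates.items(), key=lambda kv: kv[1][1], reverse=True)
        beam = dict(ranked[:beam_size])
    # Close the tour back to the start node and return the best cost found.
    return min((cost + dist[last][start] for (_, last), (cost, _) in beam.items()),
               default=math.inf)
```

With heatmap set to all ones this degenerates into an unguided restricted DP (a plain beam search over DP states); a well-trained heat map concentrates the beam on edges likely to appear in an optimal tour, which is how the neural policy is able to improve the restricted DP.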




Cited In (14)
