Scalable planning with deep neural network learned transition models
From MaRDI portal
Publication:3300670
DOI: 10.1613/JAIR.1.11829
zbMATH Open: 1476.68249
arXiv: 1904.02873
OpenAlex: W3043995598
MaRDI QID: Q3300670
FDO: Q3300670
Authors: Ga Wu, Buser Say, Scott Sanner
Publication date: 29 July 2020
Published in: Journal of Artificial Intelligence Research
Abstract: In many real-world planning problems with factored, mixed discrete and continuous state and action spaces, such as Reservoir Control, Heating, Ventilation, and Air Conditioning (HVAC), and Navigation domains, it is difficult to obtain a model of the complex nonlinear dynamics that govern state evolution. However, the ubiquity of modern sensors allows us to collect large quantities of data from each of these complex systems and build accurate, nonlinear deep neural network models of their state transitions. But one major problem remains for the task of control: how can we plan with deep network learned transition models without resorting to Monte Carlo Tree Search and other black-box techniques that ignore model structure and do not easily extend to mixed discrete and continuous domains? In this paper, we introduce two types of nonlinear planning methods that can leverage deep neural network learned transition models: Hybrid Deep MILP Planner (HD-MILP-Plan) and TensorFlow Planner (TF-Plan). In HD-MILP-Plan, we make the critical observation that the Rectified Linear Unit (ReLU) transfer function for deep networks not only allows faster convergence of model learning, but also permits a direct compilation of the deep network transition model to a Mixed-Integer Linear Program (MILP) encoding. Further, we identify deep network specific optimizations for HD-MILP-Plan that improve performance over a base encoding and show that we can plan optimally with respect to the learned deep networks. In TF-Plan, we take advantage of the efficiency of auto-differentiation tools and GPU-based computation: we encode a subclass of purely continuous planning problems as Recurrent Neural Networks and directly optimize the actions through backpropagation. We compare both planners and show that TF-Plan is able to approximate the optimal plans found by HD-MILP-Plan in less computation time...
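The MILP compilation described above rests on the fact that a ReLU unit y = max(0, a) can be represented exactly by linear constraints plus one binary indicator per unit. The following is a minimal sketch of the standard big-M encoding, assuming a known bound M on the pre-activation magnitude (the deep-network-specific optimizations of HD-MILP-Plan are not reproduced here); it enumerates the binary variable to show the constraint set admits only the true ReLU output:

```python
def relu_bigM_feasible(a, M=100.0):
    """Enumerate the binary indicator z of the big-M ReLU encoding
    and collect every output y consistent with the constraints:
        y >= a,  y >= 0,  y <= a + M*(1 - z),  y <= M*z,  z in {0, 1}.
    For |a| < M the feasible set is exactly {max(0, a)}."""
    feasible = []
    for z in (0, 1):
        lo = max(a, 0.0)                  # lower bounds: y >= a and y >= 0
        hi = min(a + M * (1 - z), M * z)  # upper bounds switched on/off by z
        if lo <= hi + 1e-12:              # constraint set nonempty for this z
            feasible.append(lo)           # the interval collapses to a point
    return sorted(set(feasible))
```

For example, `relu_bigM_feasible(3.0)` yields `[3.0]` (z = 1 is the only feasible branch) and `relu_bigM_feasible(-2.0)` yields `[0.0]` (z = 0 forces the output to zero), so a MILP solver exploring both branches recovers the network's exact forward pass.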
Full work available at URL: https://arxiv.org/abs/1904.02873
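TF-Plan's core idea — unroll the transition model over the planning horizon and optimize the action sequence directly by gradient descent — can be illustrated on a toy problem. This sketch assumes simple linear dynamics s_{t+1} = s_t + a_t and a hand-derived gradient; the actual planner differentiates through a learned deep network with auto-differentiation on GPU:

```python
def plan_by_gradient_descent(s0=0.0, goal=1.0, T=5, lam=0.1,
                             lr=0.05, iters=500):
    """Optimize actions a_0..a_{T-1} to minimize the planning loss
        (s_T - goal)**2 + lam * sum(a_t**2)
    under the toy dynamics s_{t+1} = s_t + a_t."""
    a = [0.0] * T
    for _ in range(iters):
        sT = s0 + sum(a)  # forward rollout of the (linear) dynamics
        # backward pass: d(loss)/d(a_t) = 2*(sT - goal) + 2*lam*a_t
        a = [a_t - lr * (2 * (sT - goal) + 2 * lam * a_t) for a_t in a]
    return a, s0 + sum(a)
```

The regularizer spreads the effort evenly across the horizon (all actions converge to (goal - s0)/(T + lam)), and the final state approaches but does not exactly reach the goal — the same trade-off a quadratic action cost induces in the continuous domains the paper targets.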
Recommendations
- Compact and efficient encodings for planning in factored state and action spaces with learned binarized neural network transition models
- Planning in hybrid relational MDPs
- Scaling up Heuristic Planning with Relational Decision Trees
- Learning control knowledge for forward search planning
- Efficient learning and planning with compressed predictive states
MSC classification:
- Artificial neural networks and deep learning (68T07)
- Problem solving in the context of artificial intelligence (heuristics, search strategies, etc.) (68T20)
Cited In (4)
- Classical Planning in Deep Latent Space
- Adaptive path-integral autoencoder: representation learning and planning for dynamical systems
- Locally-connected interrelated network: a forward propagation primitive
- Compact and efficient encodings for planning in factored state and action spaces with learned binarized neural network transition models