Approximate Dynamic Programming via a Smoothed Linear Program
Publication: 4648262
DOI: 10.1287/opre.1120.1044
zbMATH Open: 1251.90379
arXiv: 0908.0506
OpenAlex: W2222696761
MaRDI QID: Q4648262
Authors: Vijay V. Desai, Vivek Farias, Ciamac C. Moallemi
Publication date: 8 November 2012
Published in: Operations Research
Abstract: We present a novel linear program for the approximation of the dynamic programming cost-to-go function in high-dimensional stochastic control problems. LP approaches to approximate DP have typically relied on a natural 'projection' of a well-studied linear program for exact dynamic programming. Such programs restrict attention to approximations that are lower bounds to the optimal cost-to-go function. Our program, the 'smoothed approximate linear program', is distinct from such approaches and relaxes the restriction to lower-bounding approximations in an appropriate fashion while remaining computationally tractable. Doing so appears to have several advantages. First, we demonstrate substantially superior bounds on the quality of approximation to the optimal cost-to-go function afforded by our approach. Second, experiments on a challenging problem (the game of Tetris) show that our approach outperforms the existing LP approach (which has previously been shown to be competitive with several ADP algorithms) by an order of magnitude.
Full work available at URL: https://arxiv.org/abs/0908.0506
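The relaxation referred to in the abstract can be summarized by contrasting two optimization problems. The sketch below is a schematic summary, not the paper's exact formulation, and the notation is assumed rather than taken from this page: $\Phi$ is a basis (feature) matrix, $r$ the weight vector, $T$ the Bellman operator of the cost-minimizing MDP, $\nu$ a state-relevance distribution, $\pi$ a distribution weighting constraint violations, and $\theta \ge 0$ a violation budget.
\begin{aligned}
\text{ALP:}\quad & \max_{r}\ \nu^{\top}\Phi r && \text{s.t. } \Phi r \le T(\Phi r), \\
\text{SALP:}\quad & \max_{r,\, s \ge 0}\ \nu^{\top}\Phi r && \text{s.t. } \Phi r \le T(\Phi r) + s, \quad \pi^{\top} s \le \theta.
\end{aligned}
Since $T(J)(x) = \min_{a}\,[\,g(x,a) + \alpha \sum_{x'} P(x' \mid x, a)\, J(x')\,]$, each Bellman inequality expands into one linear constraint per state-action pair, so both problems are linear programs. With $\theta = 0$ the smoothed program reduces to the ALP and every feasible $\Phi r$ lower-bounds the optimal cost-to-go; with $\theta > 0$ a budgeted amount of constraint violation is permitted, which is the relaxation of the lower-bounding restriction described in the abstract.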
Recommendations
- The Linear Programming Approach to Approximate Dynamic Programming
- A linear programming methodology for approximate dynamic programming
- Approximation of dynamic programs
- Approximate linear programming for average cost MDPs
- Perspectives of approximate dynamic programming
- Approximate dynamic programming by practical examples
- Approximate dynamic programming. Solving the curses of dimensionality
- Approximate linear programming for first-order MDPs
Keywords: optimization; linear programming; stochastic control; Markov decision processes; approximate dynamic programming
Cited In (23)
- Relationship between least squares Monte Carlo and approximate linear programming
- Control of diffusions via linear programming
- Shape constraints in economics and operations research
- Network-based approximate linear programming for discrete optimization
- Data-driven optimal control with a relaxed linear program
- Title not available
- Approximate dynamic programming via iterated Bellman inequalities
- Approximate dynamic programming for stochastic linear control problems on compact state spaces
- A Cost-Shaping Linear Program for Average-Cost Approximate Dynamic Programming with Performance Guarantees
- Quadratic approximate dynamic programming for input-affine systems
- On Constraint Sampling in the Linear Programming Approach to Approximate Dynamic Programming
- New approximate dynamic programming algorithms for large-scale undiscounted Markov decision processes and their application to optimize a production and distribution system
- Nonparametric Approximate Dynamic Programming via the Kernel Method
- Tetris: A study of randomized constraint sampling
- Approximate linear programming for networks: average cost bounds
- State partitioning based linear program for stochastic dynamic programs: an invariance property
- A kernel-based approximate dynamic programming approach: theory and application
- A sequential smoothing algorithm with linear computational cost
- Approximate policy iteration: a survey and some new methods
- A linear programming methodology for approximate dynamic programming
- The Linear Programming Approach to Approximate Dynamic Programming
- Approximate linear programming for average cost MDPs
- Relaxations of Weakly Coupled Stochastic Dynamic Programs