Linear programming formulation for non-stationary, finite-horizon Markov decision process models
DOI: 10.1016/j.orl.2017.09.001
zbMath: 1409.90215
OpenAlex: W2755855309
MaRDI QID: Q1728357
Authors: Jeffrey P. Kharoufeh, Arnab Bhattacharya
Publication date: 22 February 2019
Published in: Operations Research Letters
Full work available at URL: https://doi.org/10.1016/j.orl.2017.09.001
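The entry itself records only bibliographic data. As background, a minimal sketch of the standard linear-programming formulation over value-function variables for a finite-horizon, non-stationary MDP is given below; the weights \(\alpha(s)\), the terminal reward \(r_T(s)\), and the notation are assumptions of this sketch, not necessarily the formulation developed in the cited paper.

% Sketch (assumed notation): horizon T, states s in S, actions a in A,
% non-stationary rewards r_t(s,a), transition probabilities p_t(s'|s,a),
% and arbitrary positive state weights alpha(s).
\begin{align*}
\min_{v} \quad & \sum_{s \in S} \alpha(s)\, v_1(s) \\
\text{s.t.} \quad & v_t(s) \;\ge\; r_t(s,a) + \sum_{s' \in S} p_t(s' \mid s,a)\, v_{t+1}(s')
    && \forall\, t < T,\ s \in S,\ a \in A, \\
& v_T(s) \;\ge\; r_T(s) && \forall\, s \in S .
\end{align*}

Any feasible \(v\) dominates the optimal value functions, so minimizing the weighted sum of first-stage values makes the constraints tight and recovers \(V_t^*\) for states with positive weight.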
Related Items (1)
Cites Work
- A new polynomial-time algorithm for linear programming
- \({\mathcal Q}\)-learning
- An Approximate Dynamic Programming Algorithm for Monotone Value Functions
- Approximate Dynamic Programming
- A Fast Algorithm for Linear Programming
- Dynamic Bid Prices in Revenue Management
- Dynamic Multipriority Patient Scheduling for a Diagnostic Resource
- The Linear Programming Approach to Approximate Dynamic Programming
- The Complexity of Markov Decision Processes
- Linear Programming and Markov Decision Chains
- On the Complexity of the Policy Improvement Algorithm for Markov Decision Processes
- A Distributed Decision-Making Structure for Dynamic Resource Allocation Using Nonlinear Functional Approximations
- On Constraint Sampling in the Linear Programming Approach to Approximate Dynamic Programming