A Policy Improvement Method in Constrained Stochastic Dynamic Programming
Publication:5281937
DOI: 10.1109/TAC.2006.880801
zbMATH Open: 1366.90213
MaRDI QID: Q5281937
Authors: Hyeong Soo Chang
Publication date: 27 July 2017
Published in: IEEE Transactions on Automatic Control
Recommendations
- A policy improvement method for constrained average Markov decision processes
- Random search for constrained Markov decision processes with multi-policy improvement
- A stochastic improvement method for stochastic programming
- Improving the performance of stochastic dual dynamic programming
- Constrained Undiscounted Stochastic Dynamic Programming
- On the policy improvement algorithm in continuous time
- Computing average optimal constrained policies in stochastic dynamic programming.
- An efficient policy iteration algorithm for dynamic programming equations
Cited In (9)
- On Bellman's principle with inequality constraints
- A Class of Decision Processes Showing Policy-Improvement/Newton–Raphson Equivalence
- An exact iterative search algorithm for constrained Markov decision processes
- A policy iteration heuristic for constrained discounted controlled Markov chains
- Constrained Markov decision processes in Borel spaces: from discounted to average optimality
- Random search for constrained Markov decision processes with multi-policy improvement
- Resource-constrained management of heterogeneous assets with stochastic deterioration
- Improved order 1/4 convergence for piecewise constant policy approximation of stochastic control problems
- A policy improvement method for constrained average Markov decision processes