On the policy iteration algorithm for nondegenerate controlled diffusions under the ergodic criterion
DOI: 10.1007/978-0-8176-8337-5_1 · zbMATH Open: 1374.93383 · OpenAlex: W330571231 · Wikidata: Q60167490 · Scholia: Q60167490 · MaRDI QID: Q4593597 · FDO: Q4593597
Author: Ari Arapostathis
Publication date: 22 November 2017
Published in: Systems & Control: Foundations & Applications
Full work available at URL: https://doi.org/10.1007/978-0-8176-8337-5_1
Recommendations
- A relative value iteration algorithm for nondegenerate controlled diffusions
- On the policy improvement algorithm for ergodic risk-sensitive control
- Policy iteration algorithm for singular controlled diffusion processes
- Convergence of the relative value iteration for the ergodic control problem of nondegenerate diffusions under near-monotone costs
- A correction to ``A relative value iteration algorithm for nondegenerate controlled diffusions''
MSC classifications:
- Diffusion processes (60J60)
- Markov and semi-Markov decision processes (90C40)
- Optimal stochastic control (93E20)
Cited In (6)
- On the policy improvement algorithm for ergodic risk-sensitive control
- A relative value iteration algorithm for nondegenerate controlled diffusions
- On Iteration Improvement for Averaged Expected Cost Control for One-Dimensional Ergodic Diffusions
- On averaged control and iteration improvement for a class of multidimensional ergodic diffusions
- A correction to ``A relative value iteration algorithm for nondegenerate controlled diffusions''
- Policy iteration algorithm for singular controlled diffusion processes
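For orientation, the publication concerns policy iteration for nondegenerate controlled diffusions on the whole space under the long-run average (ergodic) cost criterion: the evaluation step solves a Poisson equation for the current policy, and the improvement step takes a pointwise minimizer of the associated Hamiltonian. The sketch below is a minimal, hedged illustration of that scheme in its finite-state analogue (average-cost policy iteration for a unichain MDP, where the Poisson equation becomes a linear system); it is not the paper's continuous-state construction, and the function names and the random test problem are illustrative assumptions.

```python
# Average-cost (ergodic) policy iteration on a finite MDP: a discrete-state
# illustration of the scheme named in the title, not the paper's method.
import numpy as np

def policy_evaluation(P, c, pi):
    """Solve the Poisson equation for a fixed policy pi:
        h(s) + rho = c(s, pi(s)) + sum_s' P(s' | s, pi(s)) h(s'),
    with the normalization h(0) = 0.  Returns the gain rho and bias h."""
    n = P.shape[1]
    P_pi = P[pi, np.arange(n), :]        # transition matrix under pi
    c_pi = c[np.arange(n), pi]           # one-step cost under pi
    # Unknowns: h(0..n-1) and rho; n Poisson equations plus h(0) = 0.
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = np.eye(n) - P_pi
    A[:n, n] = 1.0
    A[n, 0] = 1.0
    b = np.concatenate([c_pi, [0.0]])
    x = np.linalg.solve(A, b)
    return x[n], x[:n]

def policy_iteration(P, c, tol=1e-12, max_iter=100):
    """P: (num_actions, n, n) transition kernels; c: (n, num_actions) costs."""
    n = P.shape[1]
    pi = np.zeros(n, dtype=int)
    for _ in range(max_iter):
        rho, h = policy_evaluation(P, c, pi)
        # Improvement step: minimize the right-hand side of the Poisson
        # equation (the discrete Hamiltonian) state by state.
        q = c + np.einsum('asn,n->sa', P, h)
        # Keep the current action unless another is strictly better
        # (standard tie-breaking, avoids cycling between minimizers).
        q_cur = q[np.arange(n), pi]
        pi_new = np.where(q.min(axis=1) < q_cur - tol, q.argmin(axis=1), pi)
        if np.array_equal(pi_new, pi):
            return rho, h, pi
        pi = pi_new
    return rho, h, pi

if __name__ == "__main__":
    # Hypothetical random unichain MDP, purely for demonstration.
    rng = np.random.default_rng(0)
    num_actions, n = 3, 5
    P = rng.random((num_actions, n, n))
    P /= P.sum(axis=2, keepdims=True)
    c = rng.random((n, num_actions))
    rho, h, pi = policy_iteration(P, c)
    print(f"optimal long-run average cost: {rho:.4f}, policy: {pi}")
```

In the diffusion setting of the paper, the linear solve above is replaced by solving the elliptic PDE L^v V + c(x, v(x)) = rho for the current Markov control v, and the argmin is taken over the control set in the drift and cost; the nondegeneracy assumption is what makes that evaluation step well posed.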