Open Problem—Convergence and Asymptotic Optimality of the Relative Value Iteration in Ergodic Control
From MaRDI portal
Publication:5113902
DOI: 10.1287/STSY.2019.0040
zbMATH Open: 1447.93370
OpenAlex: W2973929671
MaRDI QID: Q5113902
Authors: Ari Arapostathis
Publication date: 18 June 2020
Published in: Stochastic Systems
Full work available at URL: https://doi.org/10.1287/stsy.2019.0040
Recommendations
- Convergence of the relative value iteration for the ergodic control problem of nondegenerate diffusions under near-monotone costs
- On Some Open Problems in Optimal Control
- An open problem of optimal control theory
- On the convergence of optimal controls in certain optimization problems
- Open problem: Iterative schemes for stochastic optimization: convergence statements and limit theorems
- On the open-loop solution of linear stochastic optimal control problems
- Convergence of optimal controls in some optimization problems
- Open-loop and closed-loop solvabilities for stochastic linear quadratic optimal control problems
- Open-loop equilibriums for a general class of time-inconsistent stochastic optimal control problems
- Mean-field stochastic linear quadratic optimal control problems: open-loop solvabilities
Classification: Markov and semi-Markov decision processes (90C40); Optimal stochastic control (93E20); Networked control (93B70)
Cites Work
- Dynamic programming, Markov chains, and the method of successive approximations
- Relative Value Iteration for Stochastic Differential Games
- Large time asymptotic problems for optimal stochastic control with superlinear cost
- Subgeometric rates of convergence of \(f\)-ergodic strong Markov processes
- Value iteration in average cost Markov control processes on Borel spaces
- Error bounds for rolling horizon policies in discrete-time Markov control processes
- Value iteration and optimization of multiclass queueing networks
- On solutions of mean field games with ergodic cost
- Convergence of the relative value iteration for the ergodic control problem of nondegenerate diffusions under near-monotone costs
- A note on the convergence rate of the value iteration scheme in controlled Markov chains
- Value Iteration in a Class of Communicating Markov Decision Chains with the Average Cost Criterion
- On convergence of value iteration for a class of total cost Markov decision processes
- Illustrated review of convergence conditions of the value iteration algorithm and the rolling horizon procedure for average-cost MDPs
Cited in (3 publications)
- On the relative value iteration with a risk-sensitive criterion
- Introduction to the Applied Probability Society’s “Open Problems in Applied Probability” Session at the INFORMS Annual Meeting, Phoenix, Arizona, November 4–7, 2018
- Illustrated review of convergence conditions of the value iteration algorithm and the rolling horizon procedure for average-cost MDPs