On the relative value iteration with a risk-sensitive criterion
Publication:4989140
DOI: 10.4064/bc122-1 · zbMath: 1460.90198 · arXiv: 1912.08758 · OpenAlex: W3129918376 · MaRDI QID: Q4989140
Vivek S. Borkar, Aristotle Arapostathis
Publication date: 20 May 2021
Published in: Banach Center Publications
Full work available at URL: https://arxiv.org/abs/1912.08758
Mathematics Subject Classification:
- Continuous-time Markov processes on general state spaces (60J25)
- Sensitivity, stability, well-posedness (49K40)
- Optimal stochastic control (93E20)
- Diffusion processes (60J60)
- Markov and semi-Markov decision processes (90C40)
Related Items (1)
Cites Work
- Risk-sensitive control and an abstract Collatz-Wielandt formula
- Criteria for recurrence and existence of invariant measures for multidimensional diffusions
- Strict monotonicity of principal eigenvalues of elliptic operators in \(\mathbb R^d\) and risk-sensitive control
- Infinite horizon risk-sensitive control of diffusions without any blanket stability assumptions
- A topology for Markov controls
- The generalized principal eigenvalue for Hamilton-Jacobi-Bellman equations of ergodic type
- Average optimality for risk-sensitive control with general state space
- On the structure of solutions of ergodic type Bellman equation related to risk-sensitive control
- Selection theorems and their applications
- Relative Value Iteration for Stochastic Differential Games
- Ergodic Control of Diffusion Processes
- Stability of Markovian processes III: Foster–Lyapunov criteria for continuous-time processes
- A Relative Value Iteration Algorithm for Nondegenerate Controlled Diffusions
- Ergodic Problems for Viscous Hamilton--Jacobi Equations with Inward Drift
- Large Time Behavior of Solutions of Hamilton--Jacobi--Bellman Equations with Quadratic Nonlinearity in Gradients
- Risk-Sensitive Control of Discrete-Time Markov Processes with Infinite Horizon
- Open Problem—Convergence and Asymptotic Optimality of the Relative Value Iteration in Ergodic Control
- A Correction to “A Relative Value Iteration Algorithm for Nondegenerate Controlled Diffusions”
- Nonstationary value iteration in controlled Markov chains with risk-sensitive average criterion
- Convergence of the Relative Value Iteration for the Ergodic Control Problem of Nondegenerate Diffusions under Near-Monotone Costs
- Risk-Sensitive Optimal Control for Markov Decision Processes with Monotone Cost
- The Value Iteration Algorithm in Risk-Sensitive Average Markov Decision Chains with Finite State Space
- A Variational Formula for Risk-Sensitive Reward
- A Variational Characterization of the Risk-Sensitive Average Reward for Controlled Diffusions on $\mathbb{R}^d$