Scientific article
From MaRDI portal
Publication: 4039208
zbMath: 0771.93054
MaRDI QID: Q4039208
Onésimo Hernández-Lerma, Myriam Muñoz de Özak
Publication date: 8 August 1993
Full work available at URL: https://eudml.org/doc/27742
Keywords: Bellman's principle of optimality; Borel state; discrete-time Markov control processes; optimal cost function
MSC: Discrete-time control/observation systems (93C55); Stochastic systems in control theory (general) (93E03); Markov processes (60J99)
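As background for the keywords above (Bellman's principle of optimality, optimal cost function), a standard textbook statement of the discounted-cost optimality equation for a discrete-time Markov control process on a Borel state space X is sketched below. The notation is generic and not drawn from the paper itself, whose full text is not reproduced here: A(x) denotes the admissible action set at state x, c(x,a) the one-stage cost, Q(· | x,a) the transition kernel, and α ∈ (0,1) a discount factor.

V^*(x) = \inf_{a \in A(x)} \Big[ c(x,a) + \alpha \int_X V^*(y)\, Q(dy \mid x, a) \Big], \qquad x \in X,

where V^* is the optimal cost function. Under suitable conditions (e.g. unbounded costs controlled via weighted supremum norms, as in several of the cited works), V^* can be obtained by value iteration, V_{n+1}(x) = \inf_{a \in A(x)} [ c(x,a) + \alpha \int_X V_n(y)\, Q(dy \mid x, a) ], and a stationary policy selecting a minimizer in the optimality equation is optimal.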
Related Items (10)
- Transmission power allocation for remote estimation with multi-packet reception capabilities
- Value iteration in average cost Markov control processes on Borel spaces
- Average control of Markov decision processes with Feller transition probabilities and general action spaces
- Average optimality in dynamic programming on Borel spaces -- unbounded costs and controls
- An analysis of transient Markov decision processes
- Limiting optimal discounted-cost control of a class of time-varying stochastic systems
- Stability estimation of some Markov controlled processes
- The average cost optimality equation for Markov control processes on Borel spaces
- Weak conditions for average optimality in Markov control processes
- Partially observable Markov decision processes with partially observable random discount factors
Cites Work
- Finite-state approximations for denumerable state discounted Markov decision processes
- Controlled semi-Markov models - the discounted case
- Stochastic optimal control. The discrete time case
- Markov programming by successive approximations with respect to weighted supremum norms
- Measurable selection theorems for optimization problems
- Value iteration and rolling plans for Markov control processes with unbounded rewards
- Adaptive Markov control processes
- Numerical aspects of monotone approximations in convex stochastic control problems
- Average cost optimal policies for Markov control processes with Borel state space and unbounded costs
- On the set of optimal policies in discrete dynamic programming
- Stability and positive supermartingales
- A counterexample in discounted dynamic programming
- Recurrence conditions for Markov decision processes with Borel state space: A survey
- Density estimation and adaptive control of Markov processes: Average and discounted criteria
- Estimation and control in discounted stochastic dynamic programming
- On Dynamic Programming with Unbounded Rewards
- On optimal policies and martingales in dynamic programming
- Approximations of Dynamic Programs, II
- Ergodic Theorems for Discrete Time Stochastic Systems Using a Stochastic Lyapunov Function
- Error bounds for rolling horizon policies in discrete-time Markov control processes
- Discounted Dynamic Programming
- Optimal Discounted Stochastic Control for Diffusion Processes