On Minimum Cost Per Unit Time Control of Markov Chains
Publication: 3682367
DOI: 10.1137/0322062
zbMath: 0566.93069
OpenAlex: W2061026830
MaRDI QID: Q3682367
Publication date: 1984
Published in: SIAM Journal on Control and Optimization
Full work available at URL: https://doi.org/10.1137/0322062
MSC classification: Markov chains (discrete-time Markov processes on discrete state spaces) (60J10); Optimal stochastic control (93E20); Optimality conditions for problems involving randomness (49K45); Right processes (60J40)
Related Items (20)
A convex analytic approach to Markov decision processes
A note on the convergence rate of the value iteration scheme in controlled Markov chains
Sample-Path Optimal Stationary Policies in Stable Markov Decision Chains with the Average Reward Criterion
Optimal controls for diffusion in \(R^d\) - a min-max max-min formula for the minimal cost growth rate
Recent results on conditions for the existence of average optimal stationary policies
Adaptive control of constrained Markov chains: Criteria and policies
Nonparametric estimation and adaptive control in a class of finite Markov decision chains
Average cost Markov decision processes under the hypothesis of Doeblin
The average cost of Markov chains subject to total variation distance uncertainty
Ergodic and adaptive control of nearest-neighbor motions
Functional characterization for average cost Markov decision processes with Doeblin's conditions
Martingale limit theorem and its application to an ergodic controlled Markov chain
A note on the vanishing interest rate approach in average Markov decision chains with continuous and bounded costs
On strong average optimality of Markov decision processes with unbounded costs
Comparing recent assumptions for the existence of average optimal stationary policies
A Square Shape of the Graph of Iterates of Multifunctions: A Complete Controllability Result
On the Minimum Pair Approach for Average Cost Markov Decision Processes with Countable Discrete Action Spaces and Strictly Unbounded Costs
Infinite Horizon Average Cost Dynamic Programming Subject to Total Variation Distance Ambiguity
The convergence of value iteration in average cost Markov decision chains
A note on controlled diffusions on line with time-averaged cost