Sensitivity of constrained Markov decision processes
Publication: 1176864
DOI: 10.1007/BF02204825 · zbMath: 0735.60091 · Wikidata: Q59313626 · Scholia: Q59313626 · MaRDI QID: Q1176864
Publication date: 25 June 1992
Published in: Annals of Operations Research
Keywords: Markov decision process; continuity properties; adaptive problems; finite horizon problems; stationary Markov policies
MSC classifications: Integer programming (90C10); Sensitivity, stability, parametric optimization (90C31); Markov and semi-Markov decision processes (90C40); Markov processes (60J99)
Related Items (6)
- Asymptotic properties of constrained Markov decision processes
- Discounted cost Markov decision processes with a constraint
- Constrained semi-Markov decision processes with average rewards
- Constrained cost-coupled stochastic games with independent state processes
- Non-randomized policies for constrained Markov decision processes
- Saddle-point calculation for constrained finite Markov chains
Cites Work
- A convex analytic approach to Markov decision processes
- Adaptive control of constrained Markov chains: Criteria and policies
- On the continuity of the minimum set of a continuous function
- Finite state Markovian decision processes
- Linear Programming and Sequential Decisions
- Optimal priority assignment with hard constraint
- Markov Decision Problems and State-Action Frequencies
- Constrained Undiscounted Stochastic Dynamic Programming
- Estimation and control in discounted stochastic dynamic programming
- Optimal scheduling of interactive and noninteractive traffic in telecommunication systems
- Randomized and Past-Dependent Policies for Markov Decision Processes with Multiple Constraints
- Adaptive control of constrained Markov chains
- Solving stochastic dynamic programming problems by linear programming — An annotated bibliography
- Some Remarks on Finite Horizon Markovian Decision Models
- Controlled Markov chains with constraints.