Robust Markov control processes
From MaRDI portal
Cites work
- scientific article, zbMATH DE number 3906790 (title unavailable)
- scientific article, zbMATH DE number 4006011 (title unavailable)
- scientific article, zbMATH DE number 46153 (title unavailable)
- scientific article, zbMATH DE number 1233798 (title unavailable)
- scientific article, zbMATH DE number 1325008 (title unavailable)
- scientific article, zbMATH DE number 1134975 (title unavailable)
- scientific article, zbMATH DE number 3222422 (title unavailable)
- scientific article, zbMATH DE number 3245885 (title unavailable)
- scientific article, zbMATH DE number 3186512 (title unavailable)
- A New Condition and Approach for Zero-Sum Stochastic Games with Average Payoffs
- A Uniform Tauberian Theorem in Dynamic Programming
- A counterexample on the optimality equation in Markov decision chains with the average cost criterion
- Average Cost Optimal Stationary Policies in Infinite State Markov Decision Processes with Unbounded Costs
- Average Optimality in Dynamic Programming with General State Space
- Average optimality for risk-sensitive control with general state space
- Conditions for optimality in dynamic programming and for the limit of n-stage optimal policies to be optimal
- Discounted Dynamic Programming
- Existence of risk-sensitive optimal stationary policies for controlled Markov processes
- Invariant problems in dynamic programming - average reward criterion
- Minimax Strategies for Average Cost Stochastic Games with an Application to Inventory Models
- Measurable selections of extrema
- Minimax Control of Discrete-Time Stochastic Systems
- On Markov Games with Average Reward Criterion and Weakly Continuous Transition Probabilities
- Optimal Stationary Policies in General State Space Markov Decision Chains with Finite Action Sets
- Optimal strategies in a class of zero-sum ergodic stochastic games
- Percentile Optimization for Markov Decision Processes with Parameter Uncertainty
- Robust Control of Markov Decision Processes with Uncertain Transition Matrices
- Robust Dynamic Programming
- Stochastic games with unbounded payoffs: applications to robust control in economics
- Stochastic optimal control. The discrete time case
- Two characterizations of optimality in dynamic programming
- Zero-Sum Ergodic Stochastic Games with Feller Transition Probabilities
- Zero-sum stochastic games with unbounded costs: Discounted and average cost cases
Cited in (9)
- Q-learning for distributionally robust Markov decision processes
- Robust Markov Decision Processes
- Stochastic games with unbounded payoffs: applications to robust control in economics
- Markov decision processes with risk-sensitive criteria: an overview
- Distributionally Robust Markov Decision Processes and Their Connection to Risk Measures
- Discounted robust control for Markov diffusion processes
- Robust Control of Markov Decision Processes with Uncertain Transition Matrices
- Approximation of discounted minimax Markov control problems and zero-sum Markov games using Hausdorff and Wasserstein distances
- Zero-sum average cost semi-Markov games with weakly continuous transition probabilities and a minimax semi-Markov inventory problem
This page was built for publication: Robust Markov control processes
MaRDI item: Q401072