Robust Markov control processes
DOI: 10.1016/j.jmaa.2014.06.028
zbMATH Open: 1298.49037
OpenAlex: W2074999860
MaRDI QID: Q401072
Authors: Anna Jaśkiewicz, Andrzej S. Nowak
Publication date: 26 August 2014
Published in: Journal of Mathematical Analysis and Applications
Full work available at URL: https://doi.org/10.1016/j.jmaa.2014.06.028
Keywords: robust control; Fatou's lemma; average minimax control problem; generalised Tauberian relation; optimality inequality
MSC: 60J05 Discrete-time Markov processes on general state spaces; 93B35 Sensitivity (robustness); 49K35 Optimality conditions for minimax problems; 49K45 Optimality conditions for problems involving randomness; 93E20 Optimal stochastic control
Cites Work
- Stochastic optimal control. The discrete time case
- A Uniform Tauberian Theorem in Dynamic Programming
- Conditions for optimality in dynamic programming and for the limit of n-stage optimal policies to be optimal
- Measurable selections of extrema
- Robust Control of Markov Decision Processes with Uncertain Transition Matrices
- Robust Dynamic Programming
- Average Cost Optimal Stationary Policies in Infinite State Markov Decision Processes with Unbounded Costs
- A New Condition and Approach for Zero-Sum Stochastic Games with Average Payoffs
- Minimax Control of Discrete-Time Stochastic Systems
- Discounted Dynamic Programming
- Percentile Optimization for Markov Decision Processes with Parameter Uncertainty
- Average Optimality in Dynamic Programming with General State Space
- Existence of risk-sensitive optimal stationary policies for controlled Markov processes
- Average optimality for risk-sensitive control with general state space
- Zero-Sum Ergodic Stochastic Games with Feller Transition Probabilities
- On Markov Games with Average Reward Criterion and Weakly Continuous Transition Probabilities
- Invariant problems in dynamic programming - average reward criterion
- A counterexample on the optimality equation in Markov decision chains with the average cost criterion
- Two characterizations of optimality in dynamic programming
- Optimal strategies in a class of zero-sum ergodic stochastic games
- Minimax strategies for average cost stochastic games with an application to inventory models
- Optimal Stationary Policies in General State Space Markov Decision Chains with Finite Action Sets
- Zero-sum stochastic games with unbounded costs: Discounted and average cost cases
- Stochastic games with unbounded payoffs: applications to robust control in economics
Cited In (9)
- Zero-sum average cost semi-Markov games with weakly continuous transition probabilities and a minimax semi-Markov inventory problem
- Distributionally Robust Markov Decision Processes and Their Connection to Risk Measures
- Stochastic games with unbounded payoffs: applications to robust control in economics
- Robust Control of Markov Decision Processes with Uncertain Transition Matrices
- Approximation of discounted minimax Markov control problems and zero-sum Markov games using Hausdorff and Wasserstein distances
- Discounted robust control for Markov diffusion processes
- Q-learning for distributionally robust Markov decision processes
- Robust Markov Decision Processes
- Markov decision processes with risk-sensitive criteria: an overview