Infinite Horizon Average Cost Dynamic Programming Subject to Total Variation Distance Ambiguity
DOI: 10.1137/18M1210514 · zbMATH Open: 1421.93148 · arXiv: 1512.06510 · MaRDI QID: Q5232245
Themistoklis Charalambous, I. Tzortzis, Charalambos D. Charalambous
Publication date: 30 August 2019
Published in: SIAM Journal on Control and Optimization
Full work available at URL: https://arxiv.org/abs/1512.06510
Recommendations
- Dynamic programming subject to total variation distance ambiguity
- Infinite Horizon Stochastic Programs
- Infinite horizon programs; convergence of approximate solutions
- Existence and discovery of average optimal solutions in deterministic infinite horizon optimization
- Average Cost Optimal Stationary Policies in Infinite State Markov Decision Processes with Unbounded Costs
- An infinite-horizon multistage dynamic optimization problem
- Average optimality in nonhomogeneous infinite horizon Markov decision processes
- Infinite horizon stochastic optimal control problems with running maximum cost
- Adaptive aggregation methods for infinite horizon dynamic programming
- Infinite-horizon deterministic dynamic programming in discrete time: a monotone convergence principle and a penalty method
Keywords: dynamic programming; minimax; stochastic control; total variation distance; policy iteration; infinite horizon; average cost; Markov control models
MSC classifications: Dynamic programming (90C39); Minimax problems in mathematical programming (90C47); Optimal stochastic control (93E20)
Cites Work
- On Choosing and Bounding Probability Metrics
- Dynamic programming and stochastic control
- \(H^\infty\)-optimal control and related minimax design problems. A dynamic game approach.
- Discrete-Time Controlled Markov Processes with Average Cost Criterion: A Survey
- Minimum principle for partially observable nonlinear risk-sensitive control problems using measure-valued decompositions
- Risk-sensitive control and dynamic games for partially observed discrete-time nonlinear systems
- Distributionally robust Markov decision processes
- A Finite-Dimensional Risk-Sensitive Control Problem
- On Minimum Cost Per Unit Time Control of Markov Chains
- Another set of conditions for average optimality in Markov control processes
- Minimax optimal control of stochastic uncertain systems with relative entropy constraints
- Finite horizon minimax optimal control of stochastic partially observed time varying uncertain systems
- Stochastic Uncertain Systems Subject to Relative Entropy Constraints: Induced Norms and Monotonicity Properties of Minimax Games
- Control of Markov Chains with Long-Run Average Cost Criterion: The Dynamic Programming Equations
- Extremum Problems With Total Variation Distance and Their Applications
- Distributionally Robust Counterpart in Markov Decision Processes
- Robust MDPs with \(k\)-rectangular uncertainty
- Dynamic Programming Subject to Total Variation Distance Ambiguity
Cited In (1)