Robustness to Incorrect System Models in Stochastic Control
Publication: 5111064
DOI: 10.1137/18M1208058
zbMATH Open: 1441.93342
arXiv: 1803.06046
OpenAlex: W3020371095
MaRDI QID: Q5111064
FDO: Q5111064
Ali Devran Kara, Serdar Yüksel
Publication date: 26 May 2020
Published in: SIAM Journal on Control and Optimization
Abstract: In stochastic control applications, typically only an ideal model (controlled transition kernel) is assumed and the control design is based on the given model, raising the problem of performance loss due to the mismatch between the assumed model and the actual model. Toward this end, we study continuity properties of discrete-time stochastic control problems with respect to system models (i.e., controlled transition kernels) and robustness of optimal control policies designed for incorrect models applied to the true system. We study both fully observed and partially observed setups under an infinite horizon discounted expected cost criterion. We show that continuity and robustness cannot be established under weak and setwise convergences of transition kernels in general, but that the expected induced cost is robust under total variation. By imposing further assumptions on the measurement models and on the kernel itself (such as continuous convergence), we show that the optimal cost can be made continuous under weak convergence of transition kernels as well. Using these continuity properties, we establish convergence results and error bounds due to mismatch that occurs by the application of a control policy which is designed for an incorrectly estimated system model to a true model, thus establishing positive and negative results on robustness.
Compared to the existing literature, we obtain strictly refined robustness results that are applicable even when the incorrect models can be investigated under weak convergence and setwise convergence criteria (with respect to a true model), in addition to the total variation criteria. These entail positive implications on empirical learning in (data-driven) stochastic control, since often system models are learned through empirical training data, where typically the weak convergence criterion applies but stronger convergence criteria do not.
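For orientation, the quantities in play can be written in standard notation (a paraphrase; the symbols below are conventional and not copied from this page): the discounted cost of a policy under a given kernel, and the mismatch error incurred when a policy designed for an incorrect kernel is run on the true one.

```latex
% Discounted cost of policy \gamma under controlled transition kernel \mathcal{T},
% with stage cost c and discount factor \beta \in (0,1):
J_\beta(\mathcal{T}, \gamma) \;=\; E^{\mathcal{T},\gamma}\!\left[\sum_{t=0}^{\infty} \beta^t \, c(x_t, u_t)\right],
\qquad
J_\beta^*(\mathcal{T}) \;=\; \inf_{\gamma} J_\beta(\mathcal{T}, \gamma).

% Continuity: does \mathcal{T}_n \to \mathcal{T} (weakly, setwise, or in total
% variation) imply J_\beta^*(\mathcal{T}_n) \to J_\beta^*(\mathcal{T})?
% Robustness: letting \gamma_n^* be optimal for the incorrect model \mathcal{T}_n,
% does the mismatch error vanish?
J_\beta\!\left(\mathcal{T}, \gamma_n^*\right) \;-\; J_\beta^*(\mathcal{T}) \;\longrightarrow\; 0.
```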
Full work available at URL: https://arxiv.org/abs/1803.06046
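The abstract's closing remark on empirical learning can be made concrete with a small numerical sketch. The following illustrative Python experiment uses a hypothetical finite MDP (all parameters are invented, not taken from the paper): estimate the transition kernel from i.i.d. transition samples, design an optimal policy for the estimate by value iteration, and measure the extra cost of running that policy on the true kernel.

```python
# Illustrative sketch of model mismatch in data-driven stochastic control.
# The MDP below is an arbitrary toy example, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
nS, nA, beta = 3, 2, 0.9

# True controlled transition kernel T[a, s, s'] and stage cost c[s, a].
T_true = rng.dirichlet(np.ones(nS), size=(nA, nS))
cost = rng.uniform(0.0, 1.0, size=(nS, nA))

def value_iteration(T, tol=1e-10):
    """Optimal value function and greedy policy for kernel T."""
    V = np.zeros(nS)
    while True:
        Q = cost + beta * np.einsum("asj,j->sa", T, V)
        V_new = Q.min(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmin(axis=1)
        V = V_new

def policy_cost(T, policy):
    """Discounted cost of a stationary policy under kernel T."""
    P = np.array([T[policy[s], s] for s in range(nS)])  # induced chain
    c_pi = cost[np.arange(nS), policy]
    return np.linalg.solve(np.eye(nS) - beta * P, c_pi)

V_true, _ = value_iteration(T_true)

for n in [10, 100, 10_000]:
    # Empirical kernel from n i.i.d. transitions per (state, action) pair.
    T_hat = np.zeros_like(T_true)
    for a in range(nA):
        for s in range(nS):
            samples = rng.choice(nS, size=n, p=T_true[a, s])
            T_hat[a, s] = np.bincount(samples, minlength=nS) / n
    _, pi_hat = value_iteration(T_hat)
    # Mismatch: cost of the policy designed for T_hat, run on T_true.
    gap = policy_cost(T_true, pi_hat) - V_true
    print(f"n={n:6d}  max mismatch gap = {gap.max():.4f}")
```

As the sample size grows, the empirical kernel converges to the true one and the mismatch gap shrinks; the paper's contribution is to characterize under which notions of kernel convergence (weak, setwise, total variation) this robustness behavior is guaranteed.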
Cites Work
- 13 further cited works (titles not available)
- Real Analysis and Probability
- Optimal Transport
- Forward-backward stochastic differential games and stochastic control under model uncertainty
- Optimal stochastic linear systems with exponential performance criteria and their relation to deterministic differential games
- Stochastic optimal control. The discrete time case
- Bayesian nonparametrics
- Ambiguous chance constrained problems and robust optimization
- Conditions for optimality in dynamic programming and for the limit of n-stage optimal policies to be optimal
- Average Optimality in Markov Control Processes via Discounted-Cost Problems and Linear Programming
- H∞-optimal control and related minimax design problems. A dynamic game approach.
- Robust Control of Markov Decision Processes with Uncertain Transition Matrices
- Robust Dynamic Programming
- Connections between stochastic control and dynamic games
- Statistical Methods in Markov Chains
- Accelerating the convergence of value iteration by using partial transition functions
- Nonparametric Estimation of Conditional Distributions
- Robustness and risk-sensitive filtering
- Convergence of Dynamic Programming Models
- Robust Markov Decision Processes
- Robust H∞ control in the presence of stochastic uncertainty
- On the Asymptotic Optimality of Finite Approximations to Markov Decision Processes with Borel Spaces
- Partially observable total-cost Markov decision processes with weakly continuous transition probabilities
- Markov--Nash Equilibria in Mean-Field Games with Discounted Cost
- Entropy bounds on Bayesian learning
- Data-driven distributionally robust optimization using the Wasserstein metric: performance guarantees and tractable reformulations
- Convergence analysis for distributionally robust optimization and equilibrium problems
- Minimax optimal control of stochastic uncertain systems with relative entropy constraints
- Robust properties of risk-sensitive control
- White-Noise Representations in Stochastic Realization Theory
- Robust sensitivity analysis for stochastic systems
- Quantifying Distributional Model Risk via Optimal Transport
- Finite approximations in discrete-time stochastic control. Quantized models and asymptotic optimality
- Near optimality of quantized policies in stochastic control under weak continuity conditions
- How Does the Value Function of a Markov Decision Process Depend on the Transition Probabilities?
- On the sample complexity of the linear quadratic regulator
- Optimization and convergence of observation channels in stochastic control
- Robustness to Incorrect Priors in Partially Observed Stochastic Control
- On robustness of discrete time optimal filters
- Weak Feller property of non-linear filters
- Optimal Approximation Schedules for a Class of Iterative Algorithms, With an Application to Multigrid Value Iteration
- Dynamic Programming Subject to Total Variation Distance Ambiguity
Cited In (15)
- Model‐system parameter mismatch in GPC control
- Q-learning in regularized mean-field games
- Robustness to incorrect models and data-driven learning in average-cost optimal stochastic control
- Regularized stochastic team problems
- A robustness result for stochastic control
- Robustness to Incorrect Priors in Partially Observed Stochastic Control
- Robustness to Approximations and Model Learning in MDPs and POMDPs
- Geometry of information structures, strategic measures and associated stochastic control topologies
- Continuity of discounted values and the structure of optimal policies for periodic-review inventory systems with setup costs
- Average cost optimality of partially observed MDPs: contraction of nonlinear filters and existence of optimal solutions and approximations
- Reinforcement Learning for Linear-Convex Models with Jumps via Stability Analysis of Feedback Controls
- Control plans in models with classification errors
- Regularity and Stability of Feedback Relaxed Controls
- Another look at partially observed optimal stochastic control: existence, ergodicity, and approximations without belief-reduction
- Evaluating the adequacy of models of controlled dynamic systems