Robustness to incorrect models and data-driven learning in average-cost optimal stochastic control
From MaRDI portal
DOI: 10.1016/J.AUTOMATICA.2022.110179 · zbMATH Open: 1485.93634 · arXiv: 2003.05769 · OpenAlex: W3039633551 · MaRDI QID: Q2116649
Authors: Ali Devran Kara, Maxim Raginsky, Serdar Yüksel
Publication date: 18 March 2022
Published in: Automatica
Abstract: We study continuity and robustness properties of infinite-horizon average expected cost problems with respect to (controlled) transition kernels, and apply these results to the robustness of control policies designed for approximate models when applied to actual systems. We show that sufficient conditions presented in the literature for discounted-cost problems are in general not sufficient to ensure robustness for average-cost problems. However, the average optimal cost is continuous under convergence of controlled transition kernel models, where convergence of models entails (i) continuous weak convergence in states and actions, and (ii) continuous setwise convergence in the actions for every fixed state variable, in addition to either uniform ergodicity or certain regularity conditions. We establish that the mismatch error incurred by applying a control policy designed for an incorrectly estimated model to the true model decreases to zero as the incorrect model approaches the true model under the stated convergence criteria. These findings significantly relax the conditions in related studies, which have primarily considered the more restrictive total variation convergence criterion. Applications to robustness to models estimated from empirical data (where the almost sure weak convergence criterion typically holds, but stronger criteria do not) are studied, and conditions for asymptotic robustness to data-driven learning are established.
Full work available at URL: https://arxiv.org/abs/2003.05769
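The mechanism described in the abstract can be illustrated numerically. The following is a minimal toy sketch (not the paper's construction, and only a finite-state caricature of its continuous-space setting): an average-cost optimal policy is computed via relative value iteration on a perturbed transition kernel `P_eps` and then evaluated on the true kernel `P_true`; as the perturbation level `eps` shrinks, the mismatch error relative to the true optimal average cost shrinks as well. All names (`relative_value_iteration`, `average_cost`, the 5-state random MDP) are hypothetical illustration choices, not from the paper.

```python
# Toy illustration (hypothetical, NOT the paper's model): policy designed on a
# perturbed kernel, evaluated on the true kernel; mismatch -> 0 as eps -> 0.
import numpy as np

def relative_value_iteration(P, c, iters=2000):
    """Average-cost optimal stationary policy via relative value iteration.
    P: (A, S, S) transition kernels, c: (S, A) one-stage costs."""
    A, S, _ = P.shape
    h = np.zeros(S)
    for _ in range(iters):
        Q = c + np.stack([P[a] @ h for a in range(A)], axis=1)  # (S, A)
        h_new = Q.min(axis=1)
        h = h_new - h_new[0]  # normalize relative values to keep them bounded
    return Q.argmin(axis=1)

def average_cost(P, c, policy, iters=5000):
    """Long-run average cost of a stationary policy on kernel P."""
    S = c.shape[0]
    Ppi = np.array([P[policy[s], s] for s in range(S)])  # induced chain
    mu = np.full(S, 1.0 / S)
    for _ in range(iters):          # stationary distribution by power iteration
        mu = mu @ Ppi
    return float(mu @ np.array([c[s, policy[s]] for s in range(S)]))

rng = np.random.default_rng(0)
S, A = 5, 3
P_true = rng.dirichlet(np.ones(S), size=(A, S))  # true (ergodic) kernel
c = rng.random((S, A))                           # cost function
pi_star = relative_value_iteration(P_true, c)
J_star = average_cost(P_true, c, pi_star)        # true optimal average cost

mismatches = []
for eps in [0.5, 0.1, 0.01]:
    noise = rng.dirichlet(np.ones(S), size=(A, S))
    P_eps = (1 - eps) * P_true + eps * noise     # incorrect model
    pi_eps = relative_value_iteration(P_eps, c)  # policy for the wrong model...
    J_eps = average_cost(P_true, c, pi_eps)      # ...applied to the true system
    mismatches.append(J_eps - J_star)
    print(f"eps={eps}: mismatch error = {J_eps - J_star:.6f}")
```

Under this kind of mixture perturbation the models converge in total variation, which is stronger than the weak/setwise convergence the paper actually requires; the sketch only conveys the qualitative statement that the mismatch error vanishes as the model error does.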
Recommendations
- Robustness to incorrect system models in stochastic control
- Robustness to approximations and model learning in MDPs and POMDPs
- Robustness to Incorrect Priors in Partially Observed Stochastic Control
- Stochastic Control with Imperfect Models
- Infinite Horizon Average Cost Dynamic Programming Subject to Total Variation Distance Ambiguity
Cites Work
- Real Analysis and Probability
- Uniform Central Limit Theorems
- A Useful Convergence Theorem for Probability Distributions
- Optimal stochastic linear systems with exponential performance criteria and their relation to deterministic differential games
- Dynamic programming and stochastic control
- Bayesian nonparametrics
- Ambiguous chance constrained problems and robust optimization
- Connections between stochastic control and dynamic games
- Adaptive Markov control processes
- Robust H∞ control for linear systems with norm-bounded time-varying uncertainty
- Stability of a 4th-order curvature condition arising in optimal transport theory
- Convergence of Dynamic Programming Models
- Approximation of average cost Markov decision processes using empirical distributions and concentration inequalities
- Uniform and universal Glivenko-Cantelli classes
- Discrete Time Stochastic Adaptive Control
- Robust H∞ control in the presence of stochastic uncertainty
- Sequential decisions under uncertainty and the maximum theorem
- Recurrence conditions for Markov decision processes with Borel state space: A survey
- The universal Glivenko-Cantelli property
- Convergence analysis for distributionally robust optimization and equilibrium problems
- Minimax optimal control of stochastic uncertain systems with relative entropy constraints
- Robust properties of risk-sensitive control
- Adapted Wasserstein distances and stability in mathematical finance
- A Universal Empirical Dynamic Programming Algorithm for Continuous State MDPs
- Stochastic Control with Imperfect Models
- Robustness to incorrect system models in stochastic control
- Empirical dynamic programming
- Analyticity, Convergence, and Convergence Rate of Recursive Maximum-Likelihood Estimation in Hidden Markov Models
- Stability of optimal filter higher-order derivatives
- Exponential filter stability via Dobrushin's coefficient
- All adapted topologies are equal
- On robustness of discrete time optimal filters
- Empirical Processes, Typical Sequences, and Coordinated Actions in Standard Borel Spaces
Cited In (3)