Robustness to Incorrect Priors in Partially Observed Stochastic Control
DOI: 10.1137/17M1157660 · zbMath: 1421.93042 · arXiv: 1803.05103 · MaRDI QID: Q5232210
Serdar Yüksel, Ali Devran Kara
Publication date: 30 August 2019
Published in: SIAM Journal on Control and Optimization
Full work available at URL: https://arxiv.org/abs/1803.05103
MSC classification: Sensitivity (robustness) (93B35) · Optimal stochastic control (93E20) · Stochastic systems in control theory (general) (93E03)
Related Items
- Continuity Properties of Value Functions in Information Structures for Zero-Sum and General Games and Stochastic Teams
- Robustness to Incorrect Priors and Controlled Filter Stability in Partially Observed Stochastic Control
- Robustness to Incorrect System Models in Stochastic Control
- Q-learning in regularized mean-field games
- Robustness to Approximations and Model Learning in MDPs and POMDPs
- Regularized stochastic team problems
- Convex analytic method revisited: further optimality results and performance of deterministic policies in average cost stochastic control
Cites Work
- Uniform and universal Glivenko-Cantelli classes
- Near optimality of quantized policies in stochastic control under weak continuity conditions
- Discrete time nonlinear filters with informative observations are stable
- Stochastic optimal control. The discrete time case
- Incomplete information in Markovian decision models
- Connections between stochastic control and dynamic games
- Exponential stability of discrete-time filters for bounded observation noise
- Exponential stability in discrete-time filtering for non-ergodic signals
- Adaptive Markov control processes
- Markov chains and invariant probabilities
- Robust properties of risk-sensitive control
- The universal Glivenko-Cantelli property
- Forward-backward stochastic differential games and stochastic control under model uncertainty
- Entropy bounds on Bayesian learning
- Partially Observable Total-Cost Markov Decision Processes with Weakly Continuous Transition Probabilities
- Robust Sensitivity Analysis for Stochastic Systems
- Optimization and Convergence of Observation Channels in Stochastic Control
- Empirical Processes, Typical Sequences, and Coordinated Actions in Standard Borel Spaces
- Feedback and optimal sensitivity: Model reference transformations, multiplicative seminorms, and approximate inverses
- Convergence of Dynamic Programming Models
- A risk-sensitive maximum principle: the case of imperfect state observation
- Conditions for optimality in dynamic programming and for the limit of n-stage optimal policies to be optimal
- Reduction of a Controlled Markov Model with Incomplete Data to a Problem with Complete Information in the Case of Borel State and Control Space
- Uniform Central Limit Theorems
- Minimax optimal control of stochastic uncertain systems with relative entropy constraints
- Real Analysis and Probability
- Optimal stochastic linear systems with exponential performance criteria and their relation to deterministic differential games
- Functional Properties of Minimum Mean-Square Error and Mutual Information
- Stochastic Uncertain Systems Subject to Relative Entropy Constraints: Induced Norms and Monotonicity Properties of Minimax Games
- On the Existence of Optimal Policies for a Class of Static and Sequential Dynamic Teams
- Dynamic Programming Subject to Total Variation Distance Ambiguity
- Uniformity in weak convergence
- \(H^\infty\)-optimal control and related minimax design problems. A dynamic game approach.