Long Run Control with Degenerate Observation
Publication: 3119791
DOI: 10.1137/18M1196844
zbMath: 1409.93073
OpenAlex: W2921552466
Wikidata: Q128220187 (Scholia: Q128220187)
MaRDI QID: Q3119791
Publication date: 13 March 2019
Published in: SIAM Journal on Control and Optimization
Full work available at URL: https://doi.org/10.1137/18m1196844
Filtering in stochastic control theory (93E11); Discrete-time Markov processes on general state spaces (60J05); Optimal stochastic control (93E20)
Cites Work
- A complete solution to Blackwell's unique ergodicity problem for hidden Markov chains
- A simple proof of Kaijser's unique ergodicity result for hidden Markov α-chains
- A limit theorem for partially observed Markov chains
- Adaptive Markov control processes
- On Markov chains induced by partitioned transition probability matrices
- On the existence of stationary optimal policies for partially observed MDPs under the long-run average cost criterion
- Ergodicity of hidden Markov models
- On the average cost optimality equation and the structure of optimal policies for partially observable Markov decision processes
- On convergence in distribution of the Markov chain generated by the filter kernel induced by a fully dominated Hidden Markov Model
- A Topological Introduction to Nonlinear Analysis
- Ergodic control of partially observed Markov processes with equivalent transition probabilities
- Average Cost Dynamic Programming Equations For Controlled Markov Chains With Partial Observations
- Risk sensitive control of discrete time partially observed Markov Processes with Infinite Horizon
- Filtering of continuous-time Markov chains with noise-free observation and applications
- Dynamic Programming for Ergodic Control of Markov Chains under Partial Observations: A Correction