Ergodicity of filtering process by vanishing discount approach
DOI: 10.1016/j.sysconle.2007.08.004 · zbMath: 1137.93053 · OpenAlex: W2050221948 · MaRDI QID: Q2474453
Łukasz Stettner, Giovanni B. Di Masi
Publication date: 6 March 2008
Published in: Systems & Control Letters
Full work available at URL: https://doi.org/10.1016/j.sysconle.2007.08.004
Keywords: Bellman equation; invariant measure; Poisson equation; hidden Markov process; partially observed control problem; vanishing discount; filtering process
MSC classifications: Filtering in stochastic control theory (93E11); Discrete-time Markov processes on general state spaces (60J05); Dynamic programming in optimal control and differential games (49L20)
Related Items
- Weak Feller property of non-linear filters
- The stability of conditional Markov processes and Markov chains in random environments
Cites Work
- Exponential stability for nonlinear filtering
- Stability and uniform approximation of nonlinear filters using the Hilbert metric and application to particle filters
- Stability of nonlinear filters in nonmixing case
- Exponential forgetting and geometric ergodicity for optimal filtering in general state-space models
- Ergodic and adaptive control of hidden Markov models
- A further remark on dynamic programming for partially observed Markov processes
- Ergodicity of hidden Markov models
- Zero-Sum Ergodic Stochastic Games with Feller Transition Probabilities
- On the construction of nearly optimal strategies for a general problem of control of partially observed diffusions
- Ergodic control of partially observed Markov processes with equivalent transition probabilities
- Asymptotic Stability of the Wonham Filter: Ergodic and Nonergodic Signals
- Average Optimality in Dynamic Programming with General State Space
- Risk sensitive control of discrete time partially observed Markov processes with infinite horizon
- A Useful Convergence Theorem for Probability Distributions