Convergence of Markov chains in information divergence
From MaRDI portal
DOI: 10.1007/s10959-007-0133-7 · zbMath: 1169.60016 · MaRDI QID: Q1014048
Peter Harremoës, Klaus Kähler Holst
Publication date: 24 April 2009
Published in: Journal of Theoretical Probability
Full work available at URL: https://doi.org/10.1007/s10959-007-0133-7
Keywords: Markov chain; ergodic theorems; information divergence; reversible Markov chain; decreasing information; increasing information
MSC Classification
- 60F15: Strong limit theorems
- 60J10: Markov chains (discrete-time Markov processes on discrete state spaces)
- 94A15: Information theory (general)
- 60B11: Probability theory on linear topological spaces
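The keywords and classification above center on the paper's theme: along a Markov chain with stationary distribution π, the information divergence D(P_n ‖ π) is non-increasing in n (a consequence of the data processing inequality). A minimal pure-Python sketch of this monotonicity, using a hypothetical symmetric 3-state transition matrix (symmetric matrices are reversible with respect to the uniform distribution) that is not taken from the paper:

```python
import math

# Hypothetical symmetric (hence doubly stochastic and reversible w.r.t.
# the uniform distribution) transition matrix on 3 states.
T = [
    [0.5, 0.3, 0.2],
    [0.3, 0.4, 0.3],
    [0.2, 0.3, 0.5],
]
pi = [1 / 3, 1 / 3, 1 / 3]  # stationary distribution of T

def step(p, T):
    """One step of the chain: p_{n+1}[j] = sum_i p_n[i] * T[i][j]."""
    n = len(p)
    return [sum(p[i] * T[i][j] for i in range(n)) for j in range(n)]

def kl(p, q):
    """Information divergence D(p || q) in nats."""
    return sum(pi_ * math.log(pi_ / qi) for pi_, qi in zip(p, q) if pi_ > 0)

p = [1.0, 0.0, 0.0]  # start concentrated on state 0, so D(P_0||pi) = log 3
divs = []
for _ in range(20):
    divs.append(kl(p, pi))
    p = step(p, T)

# The divergence sequence is non-increasing and tends to 0.
assert all(a >= b - 1e-12 for a, b in zip(divs, divs[1:]))
print(f"D(P_0||pi) = {divs[0]:.4f}, D(P_19||pi) = {divs[-1]:.6f}")
```

The assertion checks exactly the "decreasing information" property listed in the keywords; the paper itself treats the harder question of when this sequence converges to 0 (ergodic theorems in divergence), which a finite sketch like this can only illustrate.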
Related Items
- Freezing phase transition in a fractal potential
- Maximum entropy on compact groups
- The maximum entropy rate description of a thermodynamic system in a stationary non-equilibrium state
Cites Work
- Sanov property, generalized I-projection and a conditional limit theorem
- Entropy and the central limit theorem
- I-divergence geometry of probability distributions and minimization problems
- Fisher information inequalities and the central limit theorem
- Information theory and the limit-theorem for Markov chains and processes with a countable infinity of states
- Passage to the Limit under the Information and Entropy Signs
- Entropy and the Law of Small Numbers
- Binomial and Poisson distributions as maximum entropy distributions
- Information projections revisited
- Solution of Shannon’s problem on the monotonicity of entropy