Convergence of Markov chains in information divergence (Q1014048)

From MaRDI portal

Language: English
Label: Convergence of Markov chains in information divergence
Description: scientific article

    Statements

    Convergence of Markov chains in information divergence (English)
    Publication date: 24 April 2009
    Some convergence theorems in probability theory can be reformulated as "the entropy converges to its maximum". A. Rényi used information divergence to prove convergence of Markov chains to equilibrium on a finite state space. Later, I. Csiszár and D. Kendall extended Rényi's method to prove convergence on countable state spaces. Here, the authors establish convergence in information divergence for a large class of Markov chains. The basic result is that information divergence is continuous under the formation of the intersection of a decreasing sequence of \(\sigma\)-algebras. The same technique can be used to obtain a classical result of Pinsker about continuity under an increasing sequence of \(\sigma\)-algebras.
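    For orientation, the information divergence referred to is the usual relative entropy; the display below is a generic formulation of the quantity and of convergence to equilibrium, not a quotation from the paper. Writing \(P_n\) for the law of the chain at time \(n\) and \(\pi\) for its stationary distribution,
    \[
    D(P_n \,\|\, \pi) \;=\; \int \log\frac{dP_n}{d\pi}\, dP_n \;\longrightarrow\; 0 \qquad (n \to \infty),
    \]
    where in general \(D(P\,\|\,Q) = \int \log\frac{dP}{dQ}\, dP\) if \(P \ll Q\) and \(D(P\,\|\,Q) = \infty\) otherwise.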
    Keywords: information divergence; increasing information; decreasing information; Markov chain; reversible Markov chain; ergodic theorems