Convergence of Markov chains in information divergence (Q1014048)

From MaRDI portal
Full work available at URL: https://doi.org/10.1007/s10959-007-0133-7
OpenAlex ID: W2058409688
Cites work:
    Solution of Shannon's problem on the monotonicity of entropy
    Q5186515
    Entropy and the central limit theorem
    Q5737389
    Q5539510
    I-divergence geometry of probability distributions and minimization problems
    Sanov property, generalized I-projection and a conditional limit theorem
    Information projections revisited
    Passage to the Limit under the Information and Entropy Signs
    Binomial and Poisson distributions as maximum entropy distributions
    Q3158591
    Fisher information inequalities and the central limit theorem
    Information theory and the limit-theorem for Markov chains and processes with a countable infinity of states
    Entropy and the Law of Small Numbers
    Q3289420
    Q3774632
    Q4047443
    Q4184753

Latest revision as of 12:52, 1 July 2024

Language: English
Label: Convergence of Markov chains in information divergence
Description: scientific article

    Statements

    Convergence of Markov chains in information divergence (English)
    24 April 2009
    Some convergence theorems in probability theory can be reformulated as the statement that ``the entropy converges to its maximum''. A. Rényi used information divergence to prove convergence of Markov chains to equilibrium on a finite state space; I. Csiszár and D. Kendall later extended Rényi's method to countable state spaces. Here the authors establish convergence in information divergence for a large class of Markov chains. The basic result is that information divergence is continuous under the formation of the intersection of a decreasing sequence of \(\sigma\)-algebras. The same technique yields a classical result of Pinsker on continuity under an increasing sequence of \(\sigma\)-algebras.
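    For orientation, a minimal sketch of the quantity and the continuity statement described above. The notation (in particular the restriction \(P|_{\mathcal{F}}\) of a measure to a sub-\(\sigma\)-algebra) is this note's, not necessarily the paper's:

```latex
% Information divergence (relative entropy) of P with respect to Q:
D(P\,\|\,Q) \;=\; \int \log\frac{dP}{dQ}\,dP,
\qquad D(P\,\|\,Q) = +\infty \ \text{ if } P \not\ll Q.

% Continuity under a decreasing sequence of sigma-algebras
% \mathcal{F}_1 \supseteq \mathcal{F}_2 \supseteq \cdots with
% \mathcal{F}_\infty = \bigcap_n \mathcal{F}_n:
\lim_{n\to\infty}
  D\bigl(P|_{\mathcal{F}_n}\,\big\|\,Q|_{\mathcal{F}_n}\bigr)
  \;=\;
  D\bigl(P|_{\mathcal{F}_\infty}\,\big\|\,Q|_{\mathcal{F}_\infty}\bigr).
```

    Pinsker's classical result is the analogous limit for an increasing sequence \(\mathcal{F}_1 \subseteq \mathcal{F}_2 \subseteq \cdots\) with \(\mathcal{F}_\infty = \sigma\bigl(\bigcup_n \mathcal{F}_n\bigr)\).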
    Keywords: information divergence; increasing information; decreasing information; Markov chain; reversible Markov chain; ergodic theorems
