Renewal theory and computable convergence rates for geometrically ergodic Markov chains (Q1774194)
From MaRDI portal
scientific article
Language | Label | Description | Also known as |
---|---|---|---|
English | Renewal theory and computable convergence rates for geometrically ergodic Markov chains | scientific article | |
Statements
Renewal theory and computable convergence rates for geometrically ergodic Markov chains (English)
29 April 2005
The author considers the class \(\{X_n;\ n\geq 0\}\) of time-homogeneous, geometrically ergodic Markov chains on a general measurable state space. The aim of the paper is to provide improved, explicitly computable bounds on the rate of convergence of the time-\(n\) transition probabilities to the (unique) invariant probability measure \(\pi\). Such computable bounds are of special importance for simulation techniques such as Markov chain Monte Carlo (MCMC).

Two basic methods are used to obtain computable convergence rates. The first method, introduced by \textit{S. D. Meyn} and \textit{R. L. Tweedie} [Ann. Appl. Probab. 4, No. 4, 981--1011 (1994; Zbl 0812.60059)], is based on renewal theory. Relying on the basic assumptions of (a1) a minorization condition, (a2) a drift condition, and (a3) a strong aperiodicity condition (recalled in a standard form below), and applying the technique of regenerative (first-entrance last-exit) decomposition, the author derives precise estimates. The second method for computing bounds on the convergence rate of the transition probabilities is based on coupling theory [J. S. Rosenthal (1995)]. The paper shows that for stochastically monotone Markov chains the coupling method appears to be close to optimal; in the absence of stochastic monotonicity, the drift condition (a2) is needed for the bivariate process.

The estimates obtained are valid for a very large class of Markov chains, and in special cases one obtains noticeable improvements on the computable bounds. This is the situation, for instance, when the Markov chain is reversible, as in the Markov chain Monte Carlo approach (the Metropolis-Hastings algorithm and the random scan Gibbs sampler), and especially when the chain is, in addition, positive. Numerical examples (a reflecting random walk, the Metropolis-Hastings algorithm for a normal distribution, contracting normals) illustrate the proposed techniques, compare them with classical methods, and point out improved estimates in various settings.
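For orientation, a standard formulation of the three assumptions and of the resulting bound, as commonly stated in this literature (the paper's exact constants and conclusion may differ), reads:
\[
\begin{aligned}
&\text{(a1)}\quad P(x,A)\;\ge\;\beta\,\nu(A) \quad\text{for all } x\in C \text{ and all measurable } A,\\
&\text{(a2)}\quad PV(x)\;\le\;\lambda V(x) \text{ for } x\notin C, \qquad PV(x)\;\le\;K \text{ for } x\in C, \qquad V\ge 1,\ \lambda<1,\\
&\text{(a3)}\quad \beta\,\nu(C)\;>\;0,
\end{aligned}
\]
for some small set \(C\), constant \(\beta>0\), and probability measure \(\nu\); the conclusion then takes the form
\[
\|P^n(x,\cdot)-\pi\|_V \;\le\; M\,\rho^{\,n}\,V(x), \qquad n\ge 0,
\]
with \(\rho<1\) and \(M<\infty\) given explicitly in terms of \(\beta\), \(\lambda\), \(K\), and \(\nu(C)\).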
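As an illustration of how the ingredients \(\beta\), \(\lambda\), \(K\) arise in the contracting normals example mentioned above, the following is a minimal sketch, not taken from the paper: it assumes the chain \(X_{n+1}\mid X_n=x\sim N(\theta x,\,1-\theta^2)\), the drift function \(V(x)=1+x^2\), and the small set \(C=[-d,d]\) with \(d>1\). The function name `drift_minorization_constants` and the particular parameter values are hypothetical choices for illustration; the code computes only the condition constants and does not reproduce the paper's bound formula.

```python
# Hypothetical sketch: drift and minorization constants for the
# "contracting normals" chain X_{n+1} | X_n = x ~ N(theta*x, 1 - theta^2),
# with drift function V(x) = 1 + x^2 and small set C = [-d, d].
# The triple (lam, K, beta) is the input that computable-bound theorems
# of the kind reviewed above turn into an explicit rate; the combining
# formula is deliberately NOT reproduced here.
import math


def std_normal_cdf(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))


def drift_minorization_constants(theta: float, d: float):
    """Return (lam, K, beta); assumes 0 < theta < 1 and d > 1 so that lam < 1."""
    sigma = math.sqrt(1.0 - theta ** 2)  # conditional standard deviation
    # Drift: P V(x) = theta^2 * V(x) + 2*(1 - theta^2) for V(x) = 1 + x^2,
    # hence P V <= lam * V outside C and P V <= K on C.
    lam = theta ** 2 + 2.0 * (1.0 - theta ** 2) / (1.0 + d ** 2)
    K = 2.0 - theta ** 2 + (theta * d) ** 2
    # Minorization: beta = integral_y min_{|x|<=d} N(theta*x, sigma^2)(y) dy.
    # The minimizing mean is -theta*d*sign(y), which gives beta = 2*Phi(-theta*d/sigma).
    beta = 2.0 * std_normal_cdf(-theta * d / sigma)
    return lam, K, beta


if __name__ == "__main__":
    lam, K, beta = drift_minorization_constants(theta=0.5, d=2.0)
    print(f"lambda = {lam:.4f}, K = {K:.4f}, beta = {beta:.4f}")
```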
geometric ergodicity
reversible Markov chain
Markov chain Monte Carlo
Metropolis-Hastings algorithm
stochastically monotone Markov chains
coupling theory
spectral gap