Quantitative convergence rates of Markov chains: A simple account (Q1860589)
scientific article
Language | Label | Description | Also known as
---|---|---|---
English | Quantitative convergence rates of Markov chains: A simple account | scientific article |
Statements
Quantitative convergence rates of Markov chains: A simple account (English)
0 references
25 February 2003
Let the transition kernel of a Markov chain be defined on a state space \(X\), and consider two copies of the chain, \(\{X_n\}\) and \(\{X_n'\}\), started from two different initial distributions \(\alpha(X_0)\) and \(\alpha(X_0')\). The total variation distance between the laws of the two copies after \(k\) steps of the chain is defined by \[ \bigl\|\alpha(X_k)-\alpha(X_k')\bigr\|_{\text{TV}}= \sup_{A\subseteq X}\bigl|P(X_k\in A)-P(X_k'\in A)\bigr|. \] Quantitative bounds on such convergence rates of Markov chains have been motivated by interest in Markov chain Monte Carlo (MCMC) algorithms, including the Gibbs sampler and the Metropolis-Hastings algorithm. The author presents such a quantitative bound on the total variation distance after \(k\) iterations between two Markov chains with different initial distributions but identical transition probabilities. The result is a simplified and improved version of a result of J. S. Rosenthal (1995); it also takes into account the \(\varepsilon\)-improvement of \textit{G. O. Roberts} and \textit{R. L. Tweedie} [Stochastic Processes Appl. 80, No. 2, 211-229 (1999; Zbl 0961.60066)], and it follows as a special case of the more complicated time-inhomogeneous results of Douc et al. (2002).
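On a finite state space, the supremum over sets \(A \subseteq X\) in the definition above reduces to half the \(\ell_1\) distance between the two distribution vectors, so the decay of the total variation distance between two copies of a chain driven by the same kernel can be observed numerically. A minimal sketch in Python/NumPy, using a made-up 3-state transition matrix (not from the paper under review):

```python
import numpy as np

# Hypothetical 3-state transition matrix; each row sums to 1.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])

def tv_distance(p, q):
    # On a finite state space, sup_A |p(A) - q(A)| = (1/2) * sum_i |p_i - q_i|.
    return 0.5 * np.abs(p - q).sum()

# Two copies of the chain with different initial distributions
# but the same transition kernel P.
alpha  = np.array([1.0, 0.0, 0.0])  # started at state 0
alpha2 = np.array([0.0, 0.0, 1.0])  # started at state 2

for k in range(6):
    print(k, tv_distance(alpha, alpha2))
    alpha  = alpha @ P   # law of X_{k+1} given law of X_k
    alpha2 = alpha2 @ P
```

The printed distances are non-increasing in \(k\), illustrating the contraction that quantitative bounds of the kind discussed in the paper make explicit (typically via drift and minorisation conditions on the kernel).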
Markov chain
convergence rate
mixing time
drift condition
minorisation condition
total variation distance