Geometric convergence rates for time-sampled Markov chains (Q1411351)
From MaRDI portal
Latest revision as of 08:45, 14 March 2024
Language | Label | Description | Also known as
---|---|---|---
English | Geometric convergence rates for time-sampled Markov chains | scientific article |
Statements
Geometric convergence rates for time-sampled Markov chains (English)
27 October 2003
Consider a Markov chain \(X_0,X_1,X_2,\dots\) on a state space \({\mathcal X}\), with transition probabilities \(P(x,\cdot)\) and stationary distribution \(\pi(\cdot)\). Such schemes are used to estimate \(\pi(h)= \int_{\mathcal X}h\,d\pi\) for various functionals \(h:{\mathcal X}\to \mathbb{R}\), by e.g. \(\widehat{\pi}(h)= \frac1n \sum_{i=1}^n h(X_i)\). Certain Markov chains may have good asymptotic variance properties, but still have poor distributional convergence properties. The author argued (2003) that in such cases it may be preferable instead to consider a time-sampled chain of the form \(P^\mu= \sum_n \mu(n)P^n\), where \(\mu\) is a probability measure on the nonnegative integers. Then \((P^\mu)^m= P^{\mu^{*m}}\), where \(\mu^{*m}\) is the \(m\)-fold convolution of \(\mu\) with itself. Equivalently, \((P^\mu)^m\) is generated by choosing \(T_m\sim \mu^{*m}\) independently of \(\{X_n\}\) and considering \(X_{T_m}\).

The second part is entitled ``Distributional convergence rates''. Markov chain convergence rates can sometimes be proved by establishing minorisation conditions of the form \(P(x,A)\geq \varepsilon\nu(A)\) for all \(x\in C\) and \(A\subseteq{\mathcal X}\), where \(C\subseteq{\mathcal X}\) is called a small set, \(\varepsilon>0\), and \(\nu(\cdot)\) is some probability measure on \({\mathcal X}\).

The third part, ``Applications to time-sampled chains'', contains two theorems, for example: Theorem 5. Consider a Markov chain \(X_1,X_2,\dots\) on a state space \({\mathcal X}\), with transition probabilities \(P(x,\cdot)\) and stationary distribution \(\pi(\cdot)\). Suppose \((P\times P)h(x,y)\leq h(x,y)/\alpha\) for \((x,y)\not\in C\times C\), where \(\alpha>1\) and \(h:{\mathcal X}\times {\mathcal X}\to [1,\infty)\). Suppose also that \(P(x,\cdot)\geq \varepsilon\nu(\cdot)\) for all \(x\in C\).
Then for any nonnegative integers \(j\) and \(m\), \[ \|{\mathcal L}(X_{m+T_j})- \pi(\cdot)\|_{TV}\leq (1-\varepsilon)^j+ \alpha^{-m-1} A_\mu^{j-1} E[h(X_0,Y_0)], \] where \(T_j\sim \mu^{*j}\) is chosen independently of \(\{X_n\}\), the expectation is taken with \(X_0\) distributed according to the initial distribution \({\mathcal L}(X_0)\) and \(Y_0\sim\pi(\cdot)\), and \(A_\mu= \sup_{x,y\in C} (P^\mu\times P^\mu) h(x,y)\). The author presents some examples. For further details see the author's references.
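For finite state spaces, the identity \((P^\mu)^m = P^{\mu^{*m}}\) can be checked numerically. The sketch below uses a hypothetical two-state chain and a finitely supported \(\mu\) (both chosen here for illustration, not taken from the paper): it builds \(P^\mu\) as the mixture \(\sum_n \mu(n)P^n\) and verifies that squaring \(P^\mu\) matches time-sampling with the convolution \(\mu*\mu\).

```python
import numpy as np

# Hypothetical two-state transition matrix (illustrative, not from the paper).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# A probability measure mu on {0, 1, 2}: mu(0), mu(1), mu(2).
mu = np.array([0.5, 0.3, 0.2])

def time_sampled(P, mu):
    """Return the time-sampled kernel P^mu = sum_n mu(n) P^n."""
    Pmu = np.zeros_like(P)
    Pn = np.eye(P.shape[0])  # P^0
    for w in mu:
        Pmu += w * Pn
        Pn = Pn @ P
    return Pmu

Pmu = time_sampled(P, mu)

# (P^mu)^2 should equal P^{mu*mu}, with mu*mu the 2-fold convolution of mu.
mu2 = np.convolve(mu, mu)
assert np.allclose(Pmu @ Pmu, time_sampled(P, mu2))
```

The check works because \((\sum_n \mu(n)P^n)^2 = \sum_{n,k}\mu(n)\mu(k)P^{n+k} = \sum_m (\mu*\mu)(m)P^m\); the same argument gives the general \(m\)-fold case.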
Markov chains
MCMC algorithms