Markov chain decomposition for convergence rate analysis (Q1872401)
From MaRDI portal
scientific article
Language | Label | Description | Also known as
---|---|---|---
English | Markov chain decomposition for convergence rate analysis | scientific article |
Statements
Markov chain decomposition for convergence rate analysis (English)
6 May 2003
The authors consider a reversible Markov chain on a state space \(\Omega\) and estimate its spectral gap. Let \(P(x,dy)\) be the transition probability kernel of a Markov chain that is reversible with respect to a probability distribution \(\pi\). The authors decompose the chain \(P\) into ``pieces''. Let \(A_1,\dots, A_m\) be subsets of \(\Omega\) such that \(\bigcup_{i=1}^m A_i=\Omega\). For each \(i= 1,\dots, m\), they define a new Markov chain on \(A_i\) whose transition kernel \(P_{[A_i]}\) is \[ P_{[A_i]}(x,B)= P(x,B)+ 1_{[x\in B]} P(x, A^c_i),\qquad x\in A_i,\;B\subset A_i,\tag{1} \] i.e. transitions of \(P\) that would leave \(A_i\) are replaced by holds at the current state. They then set \[ Z:= \sum^m_{i=1} \pi[A_i]\tag{2} \] and define the ``maximum overlap'' \(\Theta\) of the covering \(A_1,\dots, A_m\) by \[ \Theta:= \max_{x\in\Omega} \#\{i: x\in A_i\},\tag{3} \] so that \[ 1\leq Z\leq\Theta.\tag{4} \] They also introduce a crude model of the movement of the original chain among the ``pieces'': on a finite state space \(\{a_1,\dots, a_m\}\) of \(m\) points representing the \(m\) pieces, they define a discrete Markov chain with transition probabilities \[ P_H(a_i, a_j)= {\pi[A_i\cap A_j]\over \Theta\pi[A_i]}\quad (j\neq i)\quad\text{and}\quad P_H(a_i, a_i)= 1-\sum_{j\neq i} P_H(a_i, a_j).\tag{5} \] The rate of convergence to equilibrium is described via the spectral gap. The first result presented is Theorem 1.1 (state decomposition theorem): in the notation of (1)--(5), \[ \text{Gap}(P)\geq {1\over\Theta^2}\, \text{Gap}(P_H)\, \min_{i=1,\dots,m} \text{Gap}(P_{[A_i]}). \]

Section 2 treats simulated tempering. Section 3 is devoted to the Metropolis algorithm and umbrella sampling. Section 4 is entitled ``State decomposition''. Section 5 treats density decomposition. The paper closes with an appendix in which the authors consider the Caracciolo-Pelissetto-Sokal result and the Madras-Piccioni result.
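To make the state decomposition bound concrete, the following is a minimal numerical sketch (not part of the paper or of this record; the toy six-state chain, the two pieces, and helper names such as `spectral_gap` and `restricted_kernel` are illustrative assumptions). It builds the restricted kernels \(P_{[A_i]}\) of (1) and the projection chain \(P_H\) of (5) for two overlapping pieces, and checks numerically that \(\text{Gap}(P)\) dominates \(\Theta^{-2}\,\text{Gap}(P_H)\,\min_i \text{Gap}(P_{[A_i]})\).

```python
import numpy as np

def spectral_gap(P, pi):
    """Spectral gap 1 - lambda_2 of a kernel P reversible w.r.t. pi."""
    d = np.sqrt(pi)
    S = (d[:, None] * P) / d[None, :]   # symmetric matrix similar to P, same spectrum
    eig = np.sort(np.linalg.eigvalsh(S))
    return 1.0 - eig[-2]

def restricted_kernel(P, pi, A):
    """Kernel P_[A] of (1): transitions leaving A become holds at the current state."""
    PA = P[np.ix_(A, A)].astype(float).copy()
    escape = 1.0 - PA.sum(axis=1)        # P(x, A^c) for each x in A
    PA[np.diag_indices_from(PA)] += escape
    piA = pi[A] / pi[A].sum()            # pi restricted to A and renormalised
    return PA, piA

# Toy reversible chain: lazy nearest-neighbour walk on Omega = {0,...,5},
# uniform stationary distribution, covered by two overlapping pieces.
n = 6
P = np.zeros((n, n))
for x in range(n):
    for y in (x - 1, x + 1):
        if 0 <= y < n:
            P[x, y] = 0.25
    P[x, x] = 1.0 - P[x].sum()
pi = np.full(n, 1.0 / n)

pieces = [np.array([0, 1, 2, 3]), np.array([2, 3, 4, 5])]
Theta = 2                                # no state lies in more than 2 pieces

# Projection chain P_H of (5) on the piece labels a_1, ..., a_m.
m = len(pieces)
PH = np.zeros((m, m))
for i in range(m):
    for j in range(m):
        if i != j:
            overlap = np.intersect1d(pieces[i], pieces[j])
            PH[i, j] = pi[overlap].sum() / (Theta * pi[pieces[i]].sum())
    PH[i, i] = 1.0 - PH[i].sum()
piH = np.array([pi[A].sum() for A in pieces])
piH /= piH.sum()                         # = pi[A_i] / Z, cf. (2)

gaps_within = [spectral_gap(*restricted_kernel(P, pi, A)) for A in pieces]
bound = spectral_gap(PH, piH) * min(gaps_within) / Theta**2
print("Gap(P)                    =", spectral_gap(P, pi))
print("decomposition lower bound =", bound)
```

Running this prints a spectral gap for the full chain that is larger than the decomposition lower bound, as Theorem 1.1 guarantees; the gaps of the two restricted chains and of the two-state projection chain are all computed from small dense matrices, which is the point of the decomposition in practice.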
Markov chain
Monte Carlo
spectral gap
Metropolis-Hastings algorithm
simulated tempering
decomposition