On the ergodicity properties of some adaptive MCMC algorithms (Q862214)
From MaRDI portal
Language | Label | Description | Also known as
---|---|---|---
English | On the ergodicity properties of some adaptive MCMC algorithms | scientific article |
Statements
On the ergodicity properties of some adaptive MCMC algorithms (English)
5 February 2007
The Markov chain Monte Carlo (MCMC) algorithm is a popular computational method for generating samples with stationary distribution \(\pi\). The goal is to approximate the integral \(\pi(f) = \int_{X} f(x)\,\pi(dx)\), where \(X \subset {\mathbb R}^{n}\) is a possibly high-dimensional space, by the sample average \(S_{n}(f) = n^{-1}\sum_{k=1}^{n} f(X_{k})\), where \(\{X_{k} : k \geq 1\}\) is an ergodic Markov chain on \(X\) with transition probability \(P\) and stationary distribution \(\pi\). The introduction surveys algorithms such as the Metropolis-Hastings (MH) algorithm and the symmetric increments random walk MH algorithm (SRWM), and recalls related results.

In Section 2 the central concept of the adaptive MCMC algorithm, which depends on a tuning parameter \(\theta\) taking values in a space \(\Theta\), is developed. The authors set \(\theta_{0} = \theta \in \Theta\) and \(X_{0} = x \in X\), and for \(k \geq 0\) define the sequence \(\{(X_{k}, \theta_{k}) : k \geq 0\}\) recursively: if \(\theta_{k} = \theta_{c}\), then \(\theta_{k+1} = \theta_{c}\) and \(X_{k+1} = x\); otherwise \((X_{k+1}, \theta_{k+1}) \sim Q_{\rho_{k+1}}((X_{k}, \theta_{k}); \cdot)\), where \(\rho = (\rho_{k})\) is a sequence of stepsizes. The associated homogeneous Markov chain \(\{Z_{k} : k \geq 0\}\) is introduced.

Section 3 states three assumptions. Its main result, Theorem 8, is a strong law of large numbers (LLN) for \(S_{n}(f)\): it establishes the almost-sure convergence of \(S_{n}(f)\) to \(\pi(f)\). Theorem 9 is the main result of Section 4; it gives the more stringent conditions required beyond the LLN for \(S_{n}(f)\), and an invariance principle is established. Section 5 introduces two further conditions, and Theorem 11 presents the convergence with probability 1 of the noisy sequence \(\{\theta_{k}\}\).

Section 6 applies the preceding results to the SRWM algorithm. Theorem 15 shows that the strong LLN holds for any \(\alpha \in [0,1)\) and any function \(f \in {\mathcal L}(W^{\alpha})\), and that the central limit theorem holds for any function \(f \in {\mathcal L}(W^{\alpha/2})\). Section 7 applies the results to the independent MH algorithm; general properties required for the LLN and the invariance principle are developed. The main result of this section, Theorem 21, shows that for any function \(f \in {\mathcal L}_{V^{\alpha}}\) the sequence \(S_{n}(f)\) converges to \(\pi(f)\) almost surely, and that for any function \(f \in {\mathcal L}_{V^{\alpha/2}}\) convergence in distribution (a central limit theorem) holds.
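For concreteness, the following minimal Python sketch (not taken from the paper) shows the SRWM sampler and the ergodic average \(S_{n}(f)\). The Gaussian target `log_pi`, the proposal scale `sigma = 2.4`, and the test function \(f(x) = x^{2}\) are illustrative assumptions.

```python
import numpy as np

def srwm_chain(log_pi, x0, sigma, n, rng):
    """Symmetric random-walk Metropolis: Gaussian increments of scale sigma."""
    x = np.asarray(x0, dtype=float)
    chain = np.empty((n, x.size))
    lp = log_pi(x)
    for k in range(n):
        y = x + sigma * rng.standard_normal(x.size)  # symmetric proposal
        lq = log_pi(y)
        if np.log(rng.uniform()) < lq - lp:          # MH accept/reject
            x, lp = y, lq
        chain[k] = x
    return chain

# Illustrative target: standard Gaussian; f(x) = x^2, so pi(f) = 1.
rng = np.random.default_rng(0)
chain = srwm_chain(lambda x: -0.5 * np.sum(x**2), x0=[0.0], sigma=2.4, n=50_000, rng=rng)
print(np.mean(chain[:, 0] ** 2))                     # S_n(f), close to pi(f) = 1
```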
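The adaptive recursion can be illustrated by one common special case rather than the paper's general kernel \(Q_{\rho}\): take the tuning parameter \(\theta\) to be the log proposal scale and update it by a stochastic-approximation step with stepsizes \(\rho_{k}\) toward a nominal acceptance rate. The target rate 0.234, the projection interval, and the stepsize schedule are illustrative assumptions; the cemetery point \(\theta_{c}\) and the randomly varying truncation mechanism are omitted for brevity.

```python
import numpy as np

def adaptive_srwm(log_pi, x0, theta0, n, rho, rng, target_acc=0.234):
    """Adaptive SRWM sketch: theta = log proposal scale, updated by a
    Robbins-Monro step with stepsizes rho[k] (one instance of the recursion)."""
    x, theta = float(x0), float(theta0)
    lp = log_pi(x)
    xs = np.empty(n)
    for k in range(n):
        y = x + np.exp(theta) * rng.standard_normal()
        lq = log_pi(y)
        acc = np.exp(min(0.0, lq - lp))          # acceptance probability
        if rng.uniform() < acc:
            x, lp = y, lq
        theta += rho[k] * (acc - target_acc)     # push acceptance toward target
        theta = float(np.clip(theta, -10.0, 10.0))  # crude stand-in for truncation
        xs[k] = x
    return xs, theta

rng = np.random.default_rng(1)
n = 50_000
rho = 1.0 / (1.0 + np.arange(n)) ** 0.6          # diminishing stepsizes rho_k
xs, theta = adaptive_srwm(lambda x: -0.5 * x * x, x0=0.0, theta0=0.0, n=n, rho=rho, rng=rng)
print(np.mean(xs**2), np.exp(theta))             # S_n(f) for f(x)=x^2, adapted scale
```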
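The independent MH algorithm of Section 7 differs from SRWM only in its proposal, which is drawn independently of the current state. A minimal sketch follows, with a fixed Gaussian proposal for simplicity (the paper studies an adaptive version); the target and proposal densities are illustrative assumptions.

```python
import numpy as np

def independent_mh(log_pi, log_q, sample_q, x0, n, rng):
    """Independent MH: proposals drawn from q regardless of the current state."""
    x = float(x0)
    lw = log_pi(x) - log_q(x)                    # log importance weight pi/q at x
    xs = np.empty(n)
    for k in range(n):
        y = sample_q(rng)
        lw_y = log_pi(y) - log_q(y)
        if np.log(rng.uniform()) < lw_y - lw:    # accept w.p. min(1, w(y)/w(x))
            x, lw = y, lw_y
        xs[k] = x
    return xs

# Illustrative target: N(1, 1); proposal q = N(0, 2^2), heavier-tailed than pi.
rng = np.random.default_rng(2)
xs = independent_mh(
    log_pi=lambda x: -0.5 * (x - 1.0) ** 2,
    log_q=lambda x: -0.5 * (x / 2.0) ** 2,
    sample_q=lambda rng: 2.0 * rng.standard_normal(),
    x0=0.0, n=50_000, rng=rng,
)
print(np.mean(xs))                               # S_n(f) for f(x)=x, close to pi(f)=1
```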
adaptive Markov chain Monte Carlo algorithm
self-tuning algorithm
Metropolis-Hastings algorithm
stochastic approximation
state-dependent noise
randomly varying truncation
martingale
Poisson method