An adaptive version for the Metropolis adjusted Langevin algorithm with a truncated drift (Q2433262)
From MaRDI portal
scientific article
Language | Label | Description | Also known as
---|---|---|---
English | An adaptive version for the Metropolis adjusted Langevin algorithm with a truncated drift | scientific article |
Statements
An adaptive version for the Metropolis adjusted Langevin algorithm with a truncated drift (English)
27 October 2006
Let \(\chi\) be an open subset of the \(d\)-dimensional Euclidean space \({\mathbb R}^{d}\), equipped with its Borel subsets, and let \(\pi\) be a positive, continuously differentiable density with respect to the Lebesgue measure on \(\chi\). A bounded function \(D:\chi \to \chi\) serves as the drift function. The paper first recalls the Metropolis-Hastings (MH) algorithm with drift function \(D\) and its proposal density, which generates a Markov chain \((X_{n})\) with invariant distribution \(\pi\).

Section 2 extends adaptive schemes for the random walk Metropolis algorithm to more general versions of the MH algorithm, in particular to the Metropolis adjusted Langevin algorithm with a truncated drift. Section 2.1 presents the computational scheme of an adaptive version of the MH algorithm with bounded drift (Algorithm 2.1). Section 2.2 studies the ergodicity of the adaptive MH algorithm: three assumptions are introduced and are used in an essential way to prove the main results. Theorem 2.1 establishes a rate of convergence of order \({\mathcal O}\left( \frac{\log n}{n^{\lambda}}\right)\), \(\frac{1}{2} < \lambda \leq 1\), in \(V\)-norm of the distribution \({\mathcal L}(X_{n})\) of the stochastic process \((X_{n})\) generated by Algorithm 2.1 to the density \(\pi\). It is also shown that, for any measurable function \(f\), the quasi-Monte Carlo estimate of \(f\) along \((X_{n})\) converges to \(\pi(f)\) as \(n \to \infty\). Section 2.3 studies the geometric ergodicity of the MH algorithm. Section 3 illustrates Algorithm 2.1 with two simulation examples: the first samples from a 20-dimensional Gaussian distribution, and the second models the failure of pumps at a nuclear plant. Section 4 proves the convergence of stochastic approximation algorithms, developing a new approach to the analysis of stochastic approximation algorithms with Markovian dynamics.
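The truncated-drift Langevin proposal described above can be illustrated with a minimal sketch. This is not the paper's Algorithm 2.1; it only shows the generic mechanism: the gradient of \(\log \pi\) is clipped so the drift stays bounded, and the resulting Langevin proposal is accepted or rejected with the usual MH ratio. All names (`log_pi`, `grad_log_pi`, `sigma`, the truncation level `delta`) are assumptions of this sketch.

```python
import numpy as np

def truncated_drift(grad, delta=10.0):
    """Clip the gradient so the drift function D stays bounded: ||D(x)|| <= delta."""
    norm = np.linalg.norm(grad)
    return grad if norm <= delta else (delta / norm) * grad

def mala_step(x, log_pi, grad_log_pi, sigma=0.5, rng=None):
    """One Metropolis-Hastings step with a Langevin proposal and truncated drift."""
    rng = np.random.default_rng() if rng is None else rng
    drift = lambda z: 0.5 * sigma**2 * truncated_drift(grad_log_pi(z))
    # Langevin proposal: y = x + drift(x) + sigma * N(0, I)
    y = x + drift(x) + sigma * rng.standard_normal(x.size)
    # log proposal density q(a -> b), up to a constant that cancels in the ratio
    log_q = lambda a, b: -np.sum((b - a - drift(a)) ** 2) / (2.0 * sigma**2)
    # MH acceptance ratio for the asymmetric Langevin proposal
    log_alpha = log_pi(y) - log_pi(x) + log_q(y, x) - log_q(x, y)
    if np.log(rng.uniform()) < log_alpha:
        return y, True
    return x, False
```

For example, iterating `mala_step` with `log_pi = lambda z: -0.5 * np.sum(z**2)` and `grad_log_pi = lambda z: -z` samples from a standard Gaussian; the truncation only bites far out in the tails, where the raw gradient would be large.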
The adaptive process \((\sigma_{n}, \mu_{n}, \Gamma_{n})\) in Algorithm 2.1 is a stochastic approximation, in recursive form, of the solution of the equation \(h(\theta) = 0\). Theorem 4.1 studies the asymptotic behaviour of the random process \((\theta_{n}, X_{n})\) by means of mixingale theory. Theorem 2.1 is proved in Section 5, and Propositions 2.1 and 2.2 are proved in Section 6.
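The stochastic-approximation idea behind such adaptive processes can be sketched as a Robbins-Monro recursion. The sketch below is not the paper's update for \((\sigma_{n}, \mu_{n}, \Gamma_{n})\); it only illustrates the generic mechanism of driving a mean field \(h(\theta)\) to zero, here for a single scale parameter tuned toward a target acceptance rate. The target rate `target` and step sizes \(\gamma_n = n^{-\lambda}\) are assumptions of this sketch.

```python
import numpy as np

def adapt_scale(accept_probs, sigma0=1.0, target=0.574, lam=0.7):
    """Robbins-Monro update of a proposal scale from observed acceptance probabilities.

    The recursion log sigma_{n+1} = log sigma_n + gamma_n * (alpha_n - target)
    seeks a root of the (hypothetical) mean field h(theta) = E[alpha(theta)] - target.
    """
    log_sigma = np.log(sigma0)
    for n, alpha in enumerate(accept_probs, start=1):
        # step sizes with sum gamma_n = inf and sum gamma_n^2 < inf (1/2 < lam <= 1)
        gamma = n ** (-lam)
        log_sigma += gamma * (alpha - target)
    return np.exp(log_sigma)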
Keywords: Adaptive Markov chain Monte Carlo; Metropolis-Hastings algorithm; Stochastic approximation algorithm; Langevin algorithms; numerical examples; convergence; quasi-Monte Carlo algorithm