Adaptive independent Metropolis-Hastings (Q1009493)
scientific article
Language | Label | Description | Also known as |
---|---|---|---|
English | Adaptive independent Metropolis-Hastings | scientific article | |
Statements
Adaptive independent Metropolis-Hastings (English)
2 April 2009
The aim of the article is to analyze a new algorithm for producing samples from a distribution \(\pi\) when direct sampling is not possible. The authors propose an adaptive independent Metropolis-Hastings algorithm. At iteration \(i\) it uses a proposal density \(q_i\) that depends on all earlier proposed states, except the current one, together with a Metropolis-Hastings-like acceptance step. It requires that the supremum of \(\pi/q_i\) be finite (a Doeblin condition). In contrast, the adaptive independent Metropolis-Hastings algorithm of \textit{J. Gåsemyr} [Scand. J. Stat. 30, 159--173 (2003; Zbl 1038.65005)] requires that the supremum of the ratio \(f/q_i\), where \(f \propto \pi\), be known, since this supremum is used in the algorithm. The authors establish that the limiting density \(\pi\), conditioned on the history, is invariant for the algorithm. They prove a convergence bound that depends on the supremum of \(\pi/q_i\): convergence is geometric provided a strong Doeblin condition is satisfied, i.e., all the proposal distributions have uniformly heavier tails than the target distribution \(\pi\). In this case, the algorithm delivers exact samples within a finite number of iterations with probability arbitrarily close to \(1\). The bound is similar to the convergence bounds for the Metropolis-Hastings algorithm given by \textit{L. Holden} [Stat. Probab. Lett. 39, No.~4, 371--377 (1998; Zbl 0914.60043)]. The algorithm is tested on four examples that are difficult for MCMC methods (three with several modes and one with a heavy-tailed distribution). Three of the examples are nonparametric and one is parametric. The proposed algorithm outperforms the standard MCMC alternatives used for comparison, even when the Doeblin condition is not satisfied.
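To make the mechanism concrete, the following is a minimal sketch of an adaptive independent Metropolis-Hastings sampler, not the authors' exact scheme: the Gaussian proposal, the bimodal example target, and the moment-matching adaptation rule are illustrative assumptions. The proposal at each iteration is re-fitted to the history of proposed states, excluding the current one, and kept wide so that \(\sup \pi/q_i\) stays finite.

```python
# Minimal sketch of an adaptive independent Metropolis-Hastings sampler
# (illustrative assumptions: Gaussian proposal, bimodal example target,
# moment-matching adaptation; not the authors' exact scheme).
import numpy as np

def log_pi(x):
    # Unnormalised log-target: two-component Gaussian mixture (assumed example).
    return np.logaddexp(-0.5 * (x + 3.0) ** 2, -0.5 * (x - 3.0) ** 2)

def adaptive_independent_mh(n_iter, seed=0):
    rng = np.random.default_rng(seed)
    mu, sigma = 0.0, 5.0            # initial proposal: wide Gaussian
    x = rng.normal(mu, sigma)       # current state
    proposals = []                  # history of proposed states
    samples = np.empty(n_iter)
    for i in range(n_iter):
        def log_q(z):               # log-density of the current proposal q_i
            return -0.5 * ((z - mu) / sigma) ** 2 - np.log(sigma)
        y = rng.normal(mu, sigma)   # independent proposal from q_i
        # Independence-sampler acceptance: min(1, pi(y) q_i(x) / (pi(x) q_i(y)))
        log_alpha = log_pi(y) + log_q(x) - log_pi(x) - log_q(y)
        if np.log(rng.uniform()) < log_alpha:
            x = y
        samples[i] = x
        proposals.append(y)
        # Adapt q_{i+1} from earlier proposed states, excluding the current one;
        # the floor on sigma keeps the proposal tails heavier than the target's.
        past = [z for z in proposals if z != x]
        if len(past) >= 2:
            mu, sigma = float(np.mean(past)), max(float(np.std(past)), 1.5)
    return samples

chain = adaptive_independent_mh(5000)
```

The sketch only illustrates the adaptation and acceptance mechanics; the convergence guarantees discussed above additionally require the Doeblin bound \(\sup \pi/q_i < \infty\) to hold for every proposal \(q_i\).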
sampling algorithm
Metropolis-Hastings algorithm
adaptive algorithm
adaptive chain
Markov Chain Monte Carlo
inverse problems
numerical examples
convergence
Doeblin condition