Convergence of adaptive mixtures of importance sampling schemes (Q997389)
From MaRDI portal
scientific article
Language | Label | Description | Also known as |
---|---|---|---|
English | Convergence of adaptive mixtures of importance sampling schemes | scientific article | |
Statements
Convergence of adaptive mixtures of importance sampling schemes (English)
23 July 2007
Let \(\pi\) be a probability distribution dominated by a reference measure \(\mu\), so that \(\pi(dx) = \pi(x)\,d\mu(x)\), where \(\pi(x)\) denotes the density, and let \(\pi(f) = \int f(x)\,\pi(dx)\). If an i.i.d. sample \(x_1, \dots, x_N\) from \(\pi\) is available, then \(\widehat{\pi}_N(f) = N^{-1} \sum_{i=1}^N f(x_i)\) converges to \(\pi(f)\) as \(N \to \infty\) with probability one, so \(\pi(f)\) can be approximated by \(\widehat{\pi}_N(f)\). In practice, however, the normalizing constant of \(\pi\) is unknown, so the estimator \(\widehat{\pi}_N(f)\) cannot be used directly. The authors propose an algorithm for estimating \(\pi(f)\): they derive sufficient convergence conditions for adaptive mixtures of population Monte Carlo algorithms and show that Rao-Blackwellized versions asymptotically achieve an optimum in terms of a Kullback divergence criterion.
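Because the normalizing constant of \(\pi\) is unknown, \(\pi(f)\) is in practice estimated with self-normalized importance weights, and the adaptive-mixture (population Monte Carlo) idea is to tune the mixture weights of the proposal using importance-weighted, Rao-Blackwellized component responsibilities. A minimal sketch of one such scheme in plain Python, not the authors' exact algorithm; the unnormalized target, the fixed Gaussian kernel scales, and the sample sizes below are illustrative assumptions:

```python
import math
import random

random.seed(0)

def target_unnorm(x):
    # Unnormalized target density pi(x): a standard normal up to a constant.
    return math.exp(-0.5 * x * x)

def normal_pdf(x, sigma):
    # Density of a centered Gaussian with standard deviation sigma.
    return math.exp(-0.5 * (x / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

# Proposal: mixture of fixed Gaussian kernels with adaptable weights alpha.
sigmas = [0.5, 2.0, 5.0]
alpha = [1.0 / len(sigmas)] * len(sigmas)
N = 5000  # samples per iteration

for _ in range(10):
    xs, ws, resp = [], [], []
    for _ in range(N):
        # Draw a component index d from alpha, then x from that kernel.
        u, d, acc = random.random(), 0, alpha[0]
        while d < len(alpha) - 1 and u > acc:
            d += 1
            acc += alpha[d]
        x = random.gauss(0.0, sigmas[d])
        # Importance weight against the full mixture density.
        q = sum(a * normal_pdf(x, s) for a, s in zip(alpha, sigmas))
        xs.append(x)
        ws.append(target_unnorm(x) / q)
        # Rao-Blackwellized responsibilities P(component = d | x).
        resp.append([a * normal_pdf(x, s) / q for a, s in zip(alpha, sigmas)])
    W = sum(ws)
    # Update alpha: self-normalized, importance-weighted responsibilities.
    alpha = [sum(ws[i] * resp[i][d] for i in range(N)) / W
             for d in range(len(sigmas))]

# Self-normalized estimate of pi(f) for f(x) = x^2 (true value 1 here).
est = sum(w * x * x for w, x in zip(ws, xs)) / W
```

The weight update is the Rao-Blackwellized step: instead of counting which kernel each draw actually came from, it averages the posterior component probabilities, which removes the extra randomness of the component labels.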
Monte Carlo calibration
Kullback divergence
population Monte Carlo algorithm
stochastic approximation