Convergence of adaptive mixtures of importance sampling schemes

From MaRDI portal

DOI: 10.1214/009053606000001154 · zbMATH Open: 1132.60022 · arXiv: 0708.0711 · OpenAlex: W2996433517 · Wikidata: Q60461503 · Scholia: Q60461503 · MaRDI QID: Q997389 · FDO: Q997389


Authors: Randal Douc, Arnaud Guillin, Jean-Michel Marin, Christian P. Robert


Publication date: 23 July 2007

Published in: The Annals of Statistics

Abstract: In the design of efficient simulation algorithms, one is often beset with a poor choice of proposal distributions. Although the performance of a given simulation kernel can clarify a posteriori how adequate this kernel is for the problem at hand, a permanent on-line modification of kernels causes concerns about the validity of the resulting algorithm. While the issue is most often intractable for MCMC algorithms, the equivalent version for importance sampling algorithms can be validated quite precisely. We derive sufficient convergence conditions for adaptive mixtures of population Monte Carlo algorithms and show that Rao--Blackwellized versions asymptotically achieve an optimum in terms of a Kullback divergence criterion, while more rudimentary versions do not benefit from repeated updating.
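The adaptive-mixture idea summarized in the abstract can be illustrated with a minimal sketch: samples are drawn from a mixture of fixed proposal kernels, and the mixture weights are updated from Rao-Blackwellized importance weights (i.e., weights computed against the full mixture density rather than the component actually used). The standard-normal target and the Gaussian components below are chosen purely for illustration and are assumptions of this sketch, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative target: standard normal density (unnormalized is fine
# for self-normalized importance sampling).
def target(x):
    return np.exp(-0.5 * x**2)

# Fixed proposal components: zero-mean Gaussians with different scales.
scales = np.array([0.5, 1.0, 3.0])

def component_pdf(x, s):
    return np.exp(-0.5 * (x / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

alpha = np.ones(len(scales)) / len(scales)  # mixture weights, adapted below
N = 5000

for t in range(10):
    # Sample component indices, then draws from the chosen Gaussians.
    idx = rng.choice(len(scales), size=N, p=alpha)
    x = rng.normal(0.0, scales[idx])

    # Rao-Blackwellized importance weights: evaluate the *mixture*
    # density at each point, not just the component that produced it.
    comp_dens = np.stack([component_pdf(x, s) for s in scales])  # (D, N)
    q_mix = alpha @ comp_dens
    w = target(x) / q_mix
    w /= w.sum()  # self-normalize

    # Update each mixture weight as the importance-weighted posterior
    # probability that a sample came from that component.
    resp = alpha[:, None] * comp_dens / q_mix  # responsibilities, (D, N)
    alpha = resp @ w
    alpha /= alpha.sum()

# The updates drive alpha toward the component closest to the target
# (here the unit-scale Gaussian, since the target is standard normal).
```

Under these assumptions the weight updates mirror the Kullback-divergence improvement property established in the paper: the mixture concentrates on the best-matching kernel across iterations.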


Full work available at URL: https://arxiv.org/abs/0708.0711




Cited In (37)





This page was built for publication: Convergence of adaptive mixtures of importance sampling schemes
