Point process-based Monte Carlo estimation

From MaRDI portal
Publication:517403

DOI: 10.1007/S11222-015-9617-Y
zbMATH Open: 1505.62414
arXiv: 1412.6368
OpenAlex: W2963963665
MaRDI QID: Q517403
FDO: Q517403


Authors: Clément Walter


Publication date: 23 March 2017

Published in: Statistics and Computing

Abstract: This paper addresses the issue of estimating the expectation of a real-valued random variable of the form X = g(U), where g is a deterministic function and U can be a random finite- or infinite-dimensional vector. Using recent results on rare event simulation, we propose a unified framework for dealing with both probability and mean estimation for such random variables, i.e. linking algorithms such as the Tootsie Pop Algorithm (TPA) or the Last Particle Algorithm with nested sampling. In particular, it extends nested sampling as follows: first, the random variable X no longer needs to be bounded; second, it gives the principle of an ideal estimator with an infinite number of terms that is unbiased and always better than a classical Monte Carlo estimator — in particular, it has finite variance as soon as there exists k > 1 such that E[X^k] < ∞. Moreover, we address the issue of nested sampling termination and show that a random truncation of the sum can preserve unbiasedness while increasing the variance by a factor of at most 2 compared to the ideal case. We also build an unbiased estimator with fixed computational budget which supports a Central Limit Theorem, and we discuss parallel implementation of nested sampling, which can dramatically reduce its computational cost. Finally, we extensively study the case where X is heavy-tailed.
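The probability-estimation side of the framework mentioned in the abstract (the Last Particle Algorithm) can be illustrated with a minimal sketch. This is not the paper's exact implementation: it assumes X ~ Exp(1), so that conditional sampling above a level is available in closed form via the memoryless property, and the function name and parameters are illustrative. The algorithm repeatedly kills the lowest of N particles and resamples it above the current minimum; if m kills occur before all particles exceed q, then (1 - 1/N)^m estimates P(X > q).

```python
import random


def last_particle_probability(q, n_particles=100, seed=0):
    """Last Particle Algorithm estimate of P(X > q) for X ~ Exp(1).

    Conditional resampling above a level uses the memoryless property
    of the exponential distribution: (X | X > t) =d t + Exp(1).
    """
    rng = random.Random(seed)
    particles = [rng.expovariate(1.0) for _ in range(n_particles)]
    m = 0  # number of killed particles (level crossings)
    while True:
        low = min(particles)
        if low > q:
            break
        # kill the lowest particle and resample it conditionally on X > low
        i = particles.index(low)
        particles[i] = low + rng.expovariate(1.0)
        m += 1
    # each kill multiplies the estimated exceedance probability by (1 - 1/N)
    return (1.0 - 1.0 / n_particles) ** m
```

For X ~ Exp(1) the true value is P(X > q) = e^{-q}, so the estimate for q = 2 should land near 0.135; the number of kills m is approximately Poisson with mean N·q, which is what makes the point-process view of the paper natural.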


Full work available at URL: https://arxiv.org/abs/1412.6368
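The mean-estimation side (the nested-sampling extension) rests on the identity E[X] = ∫₀^∞ P(X > x) dx for X ≥ 0: the successive record minima of the particle system discretize this integral, with P(X > level) estimated by (1 - 1/N)^i after i replacements. The sketch below is a simplified, deterministically truncated version — not the paper's unbiased randomly truncated estimator — and assumes an exact conditional sampler is available; all names are illustrative.

```python
import random


def point_process_mean(sampler, conditional_sampler, n_particles=100,
                       tol=1e-8, seed=0):
    """Nested-sampling-style estimate of E[X] for X >= 0,
    using E[X] = integral of P(X > x) dx over [0, infinity).

    sampler(rng) draws X; conditional_sampler(rng, t) draws X given X > t.
    The sum is truncated once the weight falls below tol (a simplification;
    the paper instead uses a random truncation that preserves unbiasedness).
    """
    rng = random.Random(seed)
    particles = [sampler(rng) for _ in range(n_particles)]
    weight = 1.0   # current estimate of P(X > level): (1 - 1/N)^i
    level = 0.0
    total = 0.0
    while weight > tol:
        new_level = min(particles)
        total += weight * (new_level - level)  # increment of the integral
        level = new_level
        # replace the minimal particle by a conditional sample above it
        i = particles.index(new_level)
        particles[i] = conditional_sampler(rng, new_level)
        weight *= 1.0 - 1.0 / n_particles
    return total


# Example: X ~ Exp(1), so E[X] = 1; conditional sampling uses memorylessness.
estimate = point_process_mean(
    lambda rng: rng.expovariate(1.0),
    lambda rng, t: t + rng.expovariate(1.0),
)
```

With N = 100 particles the exponential example should return a value close to 1; the geometric weights (1 - 1/N)^i play the role of the shrinking prior-mass estimates in classical nested sampling.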




Recommendations




Cites Work


Cited In (7)





This page was built for publication: Point process-based Monte Carlo estimation
