Unbiased estimation of the gradient of the log-likelihood in inverse problems


DOI: 10.1007/S11222-021-09994-6
zbMATH Open: 1475.62038
arXiv: 2003.04896
OpenAlex: W3155455898
MaRDI QID: Q2058724
FDO: Q2058724

Ajay Jasra, Deng Lu, K. J. H. Law

Publication date: 9 December 2021

Published in: Statistics and Computing

Abstract: We consider the problem of estimating a parameter associated with a Bayesian inverse problem. Treating the unknown initial condition as a nuisance parameter, one must typically resort to a numerical approximation of the gradient of the log-likelihood and also adopt a discretization of the problem in space and/or time. We develop a new methodology to unbiasedly estimate the gradient of the log-likelihood with respect to the unknown parameter, i.e. the expectation of the estimate has no discretization bias. Such a property is not only useful for estimation in terms of the original stochastic model of interest, but can also be exploited in stochastic gradient algorithms, which benefit from unbiased estimates. Under appropriate assumptions, we prove that our estimator is not only unbiased but also of finite variance. In addition, when implemented on a single processor, we show that the cost to achieve a given level of error is comparable to that of multilevel Monte Carlo methods, both practically and theoretically. However, the new algorithm allows parallel computation on arbitrarily many processors without any asymptotic loss of efficiency. In practice, this means any precision can be achieved in a fixed, finite constant time, provided that enough processors are available.
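The debiasing idea described in the abstract can be illustrated with a randomized single-term estimator of Rhee-Glynn type: sample a discretization level L at random, compute a coupled multilevel increment at that level, and reweight by the sampling probability so that the telescoping sum removes the discretization bias in expectation. The sketch below is a toy illustration of that mechanism under assumed parameters (a scalar target `mu`, a bias constant `c`, and increment noise decaying geometrically); it is not the authors' actual inverse-problem estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (assumed): the level-l discretization has mean
# m_l = mu + c * 2**(-l), so the bias vanishes as l -> infinity.
mu, c = 1.0, 0.5

def coupled_increment(l, rng):
    """Unbiased estimate of m_l - m_{l-1} (or m_0 when l == 0),
    with variance decaying like 4**(-l), mimicking an MLMC coupling."""
    if l == 0:
        return (mu + c) + 0.1 * rng.standard_normal()
    m_l = mu + c * 2.0 ** (-l)
    m_prev = mu + c * 2.0 ** (-(l - 1))
    return (m_l - m_prev) + 0.1 * 2.0 ** (-l) * rng.standard_normal()

# Level distribution P(L = l) = (1 - r) * r**l on {0, 1, 2, ...};
# r must be chosen so that E[increment_l^2] / p_l is summable,
# which gives the finite-variance property.
r = 2.0 ** (-1.5)

def single_term_estimator(rng):
    l = rng.geometric(1.0 - r) - 1          # geometric on {0, 1, 2, ...}
    p_l = (1.0 - r) * r ** l
    return coupled_increment(l, rng) / p_l  # importance-weighted increment

# The telescoping sum of E[increment_l] equals lim m_l = mu,
# so the estimator is unbiased despite every level being discretized.
est = np.mean([single_term_estimator(rng) for _ in range(200_000)])
```

Averaging many independent copies recovers `mu` with no discretization bias, and each copy is independent, which is what permits the embarrassingly parallel implementation mentioned in the abstract.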


Full work available at URL: https://arxiv.org/abs/2003.04896






Cited In (9)






This page was built for publication: Unbiased estimation of the gradient of the log-likelihood in inverse problems
