Unbiased estimation of the gradient of the log-likelihood in inverse problems
From MaRDI portal
Publication:2058724
DOI: 10.1007/S11222-021-09994-6 · zbMATH Open: 1475.62038 · arXiv: 2003.04896 · OpenAlex: W3155455898 · MaRDI QID: Q2058724
Ajay Jasra, Deng Lu, K. J. H. Law
Publication date: 9 December 2021
Published in: Statistics and Computing
Abstract: We consider the problem of estimating a parameter associated with a Bayesian inverse problem. Treating the unknown initial condition as a nuisance parameter, one must typically resort to a numerical approximation of the gradient of the log-likelihood and also adopt a discretization of the problem in space and/or time. We develop a new methodology to unbiasedly estimate the gradient of the log-likelihood with respect to the unknown parameter, i.e. the expectation of the estimate has no discretization bias. Such a property is not only useful for estimation in terms of the original stochastic model of interest, but can also be used in stochastic gradient algorithms, which benefit from unbiased estimates. Under appropriate assumptions, we prove that our estimator is not only unbiased but also of finite variance. In addition, when implemented on a single processor, we show that the cost to achieve a given level of error is comparable to that of multilevel Monte Carlo methods, both practically and theoretically. However, the new algorithm allows parallel computation on arbitrarily many processors without any asymptotic loss of efficiency. In practice, this means any precision can be achieved in a fixed, finite constant time, provided that enough processors are available.
Full work available at URL: https://arxiv.org/abs/2003.04896
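The core idea behind unbiased estimation of a quantity that is only available through biased discretizations, as described in the abstract, is randomized truncation of the multilevel telescoping sum (the single-term debiasing scheme of Rhee and Glynn, cited below). The sketch here is a toy illustration in plain Python, not the paper's actual estimator for inverse problems: the target limit, the level-`l` approximation `approx`, the geometric level distribution, and the noise model are all illustrative assumptions.

```python
import math
import random

def approx(level, eps=0.0):
    # Level-l biased approximation of e = lim_{n->inf} (1 + 1/n)^n with n = 2^l,
    # perturbed by a shared noise term eps to mimic coupled Monte Carlo sampling.
    n = 2 ** level
    return (1.0 + 1.0 / n) ** n + eps

def single_term_estimator(rng, p=0.6, max_level=40):
    # Draw a random level L with geometric tail P(L = l) = p * (1 - p)^l.
    L = 0
    while rng.random() > p and L < max_level:
        L += 1
    prob = p * (1.0 - p) ** L

    # The same noise realization is used at both levels, so the increment
    # Delta_L = approx(L) - approx(L-1) telescopes and its size decays with L.
    noise = rng.gauss(0.0, 0.01)
    if L == 0:
        delta = approx(0, noise)
    else:
        delta = approx(L, noise) - approx(L - 1, noise)

    # Importance-weight the increment: E[Delta_L / P(L)] = sum_l Delta_l,
    # which telescopes to the bias-free limit e.
    return delta / prob

rng = random.Random(1)
n_samples = 200_000
est = sum(single_term_estimator(rng) for _ in range(n_samples)) / n_samples
# est approximates e = 2.71828... without discretization bias
```

Each call is independent and cheap (its expected cost is finite because the level distribution has a geometric tail), so, as the abstract notes for the paper's estimator, the samples can be averaged across arbitrarily many processors with no loss of efficiency; finite variance requires the increments to decay fast enough relative to the tail of the level distribution.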
Cites Work
- Sequential Monte Carlo Samplers
- Inference in hidden Markov models.
- Inverse problems: a Bayesian perspective
- Well-posed stochastic extensions of ill-posed linear problems
- Mean field simulation for Monte Carlo integration
- Uncertainty Quantification and Weak Approximation of an Elliptic Inverse Problem
- Multilevel sequential Monte Carlo: Mean square error bounds under verifiable conditions
- Unbiased Estimators and Multilevel Monte Carlo
- Unbiased estimation with square root convergence for SDE models
- A general method for debiasing a Monte Carlo estimator
- Multilevel sequential Monte Carlo samplers
- The approximate solution of Fredholm integral equations of the first kind
- Unbiased Monte Carlo: posterior estimation for intractable/infinite-dimensional models
- Multilevel Sequential Monte Carlo with Dimension-Independent Likelihood-Informed Proposals
Cited In (9)
- Unbiased Estimation Using Underdamped Langevin Dynamics
- Multi-index sequential Monte Carlo ratio estimators for Bayesian inverse problems
- Coordinate Based Empirical Likelihood-Like Estimation in Ill-Conditioned Inverse Problems
- A randomized multi-index sequential Monte Carlo method
- Efficient importance sampling for large sums of independent and identically distributed random variables
- Unbiased parameter estimation for partially observed diffusions
- On unbiased backtransform of lognormal kriging estimates
- On Unbiased Estimation for Discretized Models
- Constructing unbiased gradient estimators with finite variance for conditional stochastic optimization
This page was built for publication: Unbiased estimation of the gradient of the log-likelihood in inverse problems