Unbiased estimation of the gradient of the log-likelihood in inverse problems
From MaRDI portal
Publication:2058724
Abstract: We consider the problem of estimating a parameter associated with a Bayesian inverse problem. Treating the unknown initial condition as a nuisance parameter, one must typically resort to a numerical approximation of the gradient of the log-likelihood and also adopt a discretization of the problem in space and/or time. We develop a new methodology to unbiasedly estimate the gradient of the log-likelihood with respect to the unknown parameter, i.e., the expectation of the estimate has no discretization bias. Such a property is not only useful for estimation in terms of the original stochastic model of interest, but can also be exploited in stochastic gradient algorithms, which benefit from unbiased estimates. Under appropriate assumptions, we prove that our estimator is not only unbiased but also of finite variance. In addition, when implemented on a single processor, we show that the cost to achieve a given level of error is comparable to that of multilevel Monte Carlo methods, both practically and theoretically. However, the new algorithm allows parallel computation on arbitrarily many processors without any asymptotic loss of efficiency. In practice, this means any precision can be achieved in a fixed, finite amount of time, provided that enough processors are available.
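The core debiasing idea behind estimators of this type (as in the cited works of Rhee and Glynn, and McLeish) is randomized truncation over discretization levels: draw a random level L, compute the increment between consecutive discretizations, and reweight by the level's probability so the expectation telescopes to the exact, discretization-free limit. The sketch below illustrates this on a toy quadrature problem with the single-term estimator; it is a hedged illustration of the general principle, not the paper's sequential Monte Carlo construction, and all function names are chosen here for exposition.

```python
import math
import random

def trapezoid(f, a, b, n):
    """Trapezoid-rule approximation of the integral of f over [a, b]
    with n sub-intervals; this plays the role of the level-l discretization."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

def single_term_estimator(f, a, b, rng):
    """One draw of a Rhee--Glynn single-term debiased estimator.

    Sample a level L with P(L = l) = 2**-(l+1), form the increment
    Delta_L between discretization levels L and L-1, and return
    Delta_L / P(L).  The expectation telescopes to the exact integral,
    so the estimate carries no discretization bias."""
    L = 0
    while rng.random() < 0.5:
        L += 1
    p_L = 2.0 ** -(L + 1)
    fine = trapezoid(f, a, b, 2 ** L)
    if L == 0:
        delta = fine                                   # Delta_0 = X_0
    else:
        delta = fine - trapezoid(f, a, b, 2 ** (L - 1))  # Delta_l = X_l - X_{l-1}
    return delta / p_L

# Toy target: integral of 4/(1+x^2) over [0, 1] equals pi exactly.
rng = random.Random(0)
f = lambda x: 4.0 / (1.0 + x * x)
draws = [single_term_estimator(f, 0.0, 1.0, rng) for _ in range(20000)]
est = sum(draws) / len(draws)
```

Because the trapezoid increments decay like 4^{-l} while the level probabilities decay only like 2^{-l}, the estimator has finite variance here, mirroring the finite-variance condition established in the paper; independent draws can also be averaged across arbitrarily many processors, which is the source of the parallelism discussed in the abstract.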
Recommendations
- On Unbiased Estimation for Discretized Models
- Unbiased estimation of the gradient of the log-likelihood for a class of continuous-time state-space models
- Unbiased MLMC stochastic gradient-based optimization of Bayesian experimental designs
- Gradient of the log-likelihood ratio for infinite-dimensional stochastic systems
- Unbiased Markov chain Monte Carlo for intractable target distributions
Cites work
- scientific article; zbMATH DE number 48727
- scientific article; zbMATH DE number 1972910
- scientific article; zbMATH DE number 2106098
- scientific article; zbMATH DE number 936298
- A general method for debiasing a Monte Carlo estimator
- Inference in hidden Markov models
- Inverse problems: a Bayesian perspective
- Mean field simulation for Monte Carlo integration
- Multilevel sequential Monte Carlo samplers
- Multilevel sequential Monte Carlo with dimension-independent likelihood-informed proposals
- Multilevel sequential Monte Carlo: Mean square error bounds under verifiable conditions
- Sequential Monte Carlo Samplers
- The approximate solution of Fredholm integral equations of the first kind
- Unbiased Monte Carlo: posterior estimation for intractable/infinite-dimensional models
- Unbiased estimation with square root convergence for SDE models
- Unbiased estimators and multilevel Monte Carlo
- Uncertainty Quantification and Weak Approximation of an Elliptic Inverse Problem
- Well-posed stochastic extensions of ill-posed linear problems
Cited in (10)
- Unbiased Estimation Using Underdamped Langevin Dynamics
- Coordinate Based Empirical Likelihood-Like Estimation in Ill-Conditioned Inverse Problems
- Multi-index sequential Monte Carlo ratio estimators for Bayesian inverse problems
- A randomized multi-index sequential Monte Carlo method
- Efficient importance sampling for large sums of independent and identically distributed random variables
- Unbiased parameter estimation for partially observed diffusions
- On unbiased backtransform of lognormal kriging estimates
- On Unbiased Estimation for Discretized Models
- Constructing unbiased gradient estimators with finite variance for conditional stochastic optimization
- Unbiased estimation of the gradient of the log-likelihood for a class of continuous-time state-space models
This page was built for publication: Unbiased estimation of the gradient of the log-likelihood in inverse problems