Bayesian posterior contraction rates for linear severely ill-posed inverse problems
From MaRDI portal
Abstract: We consider a class of linear ill-posed inverse problems arising from inversion of a compact operator with singular values which decay exponentially to zero. We adopt a Bayesian approach, assuming a Gaussian prior on the unknown function. If the observational noise is assumed to be Gaussian then this prior is conjugate to the likelihood so that the posterior distribution is also Gaussian. We study Bayesian posterior consistency in the small observational noise limit. We assume that the forward operator and the prior and noise covariance operators commute with one another. We show how, for given smoothness assumptions on the truth, the scale parameter of the prior can be adjusted to optimize the rate of posterior contraction to the truth, and we explicitly compute the logarithmic rate.
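The conjugate Gaussian structure described in the abstract can be illustrated with a small numerical sketch. Since the forward, prior, and noise covariance operators are assumed to commute, the problem diagonalizes in a common eigenbasis and the posterior can be computed coordinatewise. The specific choices below (exponentially decaying singular values `kappa_k = exp(-b*k)`, prior variances `tau^2 * k^(-2*alpha-1)`, and the chosen truth) are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Sketch of a diagonal sequence-space model y_k = kappa_k * u_k + noise_k,
# with exponentially decaying singular values (severely ill-posed) and a
# Gaussian prior u_k ~ N(0, tau^2 * k^(-2*alpha - 1)). All parameter values
# here are assumed for illustration.

rng = np.random.default_rng(0)
K = 200                          # number of retained singular components
k = np.arange(1, K + 1)

b, alpha = 1.0, 1.0              # decay rate and prior regularity (assumed)
noise = 1e-3                     # observational noise level
tau = 1.0                        # prior scale parameter (the tunable quantity)

kappa = np.exp(-b * k)                    # singular values, exponential decay
prior_var = tau**2 * k**(-2.0 * alpha - 1.0)

u_true = k**(-alpha - 1.0)                # a hypothetical truth of matching smoothness
y = kappa * u_true + noise * rng.standard_normal(K)

# Conjugate Gaussian posterior, computed coordinatewise:
#   variance = (kappa^2 / noise^2 + 1 / prior_var)^(-1)
#   mean     = variance * kappa * y / noise^2
post_var = 1.0 / (kappa**2 / noise**2 + 1.0 / prior_var)
post_mean = post_var * kappa * y / noise**2

err = np.sqrt(np.sum((post_mean - u_true)**2))
print(f"L2 error of posterior mean: {err:.4f}")
```

Because the noise is Gaussian and the prior is conjugate, `post_mean` and `post_var` fully characterize the (Gaussian) posterior; shrinking `noise` while rescaling `tau` appropriately is the small-noise regime in which the paper studies contraction rates.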
Recommendations
- Posterior contraction rates for the Bayesian approach to linear ill-posed inverse problems
- Bayesian inverse problems with non-conjugate priors
- Bayesian linear inverse problems in regularity scales
- Posterior contraction in Bayesian inverse problems under Gaussian priors
- Bayesian inverse problems with partial observations
Cited in (29 documents)
- Oracle-type posterior contraction rates in Bayesian inverse problems
- Sparsity-promoting and edge-preserving maximum a posteriori estimators in non-parametric Bayesian inverse problems
- A posterior contraction for Bayesian inverse problems in Banach spaces
- Bayesian inverse problems with non-conjugate priors
- Bayesian inverse problems with non-commuting operators
- Probabilistic regularization of Fredholm integral equations of the first kind
- Hyperparameter estimation in Bayesian MAP estimation: parameterizations and consistency
- On the well-posedness of Bayesian inverse problems
- Weak-norm posterior contraction rate of the 4DVAR method for linear severely ill-posed problems
- Importance sampling: intrinsic dimension and computational cost
- Bayesian inverse problems with partial observations
- Convergence Rates for Linear Inverse Problems in the Presence of an Additive Normal Noise
- A general approach to posterior contraction in nonparametric inverse problems
- Designing truncated priors for direct and inverse Bayesian problems
- Posterior contraction in Bayesian inverse problems under Gaussian priors
- Posterior contraction for empirical Bayesian approach to inverse problems under non-diagonal assumption
- Bayesian Inverse Problems Are Usually Well-Posed
- Posterior contraction rates for the Bayesian approach to linear ill-posed inverse problems
- Posterior consistency and convergence rates for Bayesian inversion with hypoelliptic operators
- Posterior consistency for Bayesian inverse problems through stability and regression results
- Regularized posteriors in linear ill-posed inverse problems
- Bayesian inversion techniques for stochastic partial differential equations
- Bernstein-von Mises theorems and uncertainty quantification for linear inverse problems
- An improved quasi-reversibility method for a terminal-boundary value multi-species model with white Gaussian noise
- Bayesian linear inverse problems in regularity scales
- Convergence Rates for Learning Linear Operators from Noisy Data
- Analysis of a quasi-reversibility method for nonlinear parabolic equations with uncertainty data
- Solving inverse problems using data-driven models
- Bayesian inverse problems with unknown operators