Bayesian posterior contraction rates for linear severely ill-posed inverse problems
From MaRDI portal
Publication:2016479
Abstract: We consider a class of linear ill-posed inverse problems arising from inversion of a compact operator with singular values which decay exponentially to zero. We adopt a Bayesian approach, assuming a Gaussian prior on the unknown function. If the observational noise is assumed to be Gaussian then this prior is conjugate to the likelihood so that the posterior distribution is also Gaussian. We study Bayesian posterior consistency in the small observational noise limit. We assume that the forward operator and the prior and noise covariance operators commute with one another. We show how, for given smoothness assumptions on the truth, the scale parameter of the prior can be adjusted to optimize the rate of posterior contraction to the truth, and we explicitly compute the logarithmic rate.
Recommendations
- Posterior contraction rates for the Bayesian approach to linear ill-posed inverse problems
- Bayesian inverse problems with non-conjugate priors
- Bayesian linear inverse problems in regularity scales
- Posterior contraction in Bayesian inverse problems under Gaussian priors
- Bayesian inverse problems with partial observations
Cited in (29)
- Bayesian linear inverse problems in regularity scales
- A general approach to posterior contraction in nonparametric inverse problems
- On the well-posedness of Bayesian inverse problems
- Probabilistic regularization of Fredholm integral equations of the first kind
- Bayesian inverse problems with unknown operators
- A posterior contraction for Bayesian inverse problems in Banach spaces
- Hyperparameter estimation in Bayesian MAP estimation: parameterizations and consistency
- Regularized posteriors in linear ill-posed inverse problems
- Bayesian inverse problems with non-commuting operators
- Posterior contraction rates for the Bayesian approach to linear ill-posed inverse problems
- Bayesian inverse problems with partial observations
- Importance sampling: intrinsic dimension and computational cost
- Designing truncated priors for direct and inverse Bayesian problems
- Bayesian inverse problems with non-conjugate priors
- Bayesian inversion techniques for stochastic partial differential equations
- Sparsity-promoting and edge-preserving maximum a posteriori estimators in non-parametric Bayesian inverse problems
- Solving inverse problems using data-driven models
- Bernstein-von Mises theorems and uncertainty quantification for linear inverse problems
- Oracle-type posterior contraction rates in Bayesian inverse problems
- Analysis of a quasi-reversibility method for nonlinear parabolic equations with uncertainty data
- Convergence Rates for Linear Inverse Problems in the Presence of an Additive Normal Noise
- Posterior contraction for empirical Bayesian approach to inverse problems under non-diagonal assumption
- Convergence Rates for Learning Linear Operators from Noisy Data
- Posterior consistency and convergence rates for Bayesian inversion with hypoelliptic operators
- Posterior contraction in Bayesian inverse problems under Gaussian priors
- Posterior consistency for Bayesian inverse problems through stability and regression results
- Weak-norm posterior contraction rate of the 4DVAR method for linear severely ill-posed problems
- Bayesian Inverse Problems Are Usually Well-Posed
- An improved quasi-reversibility method for a terminal-boundary value multi-species model with white Gaussian noise