Maximum a posteriori estimates in linear inverse problems with log-concave priors are proper Bayes estimators
From MaRDI portal
Publication:2936498
Abstract: A frequent matter of debate in Bayesian inversion is which of the two principal point estimators, the maximum a posteriori (MAP) estimate or the conditional mean (CM) estimate, is to be preferred. Since the MAP estimate corresponds to the solution given by variational regularization techniques, this is also a constant matter of debate between the two research areas. Following a theoretical argument, the Bayes cost formalism, the CM estimate is classically preferred for being the Bayes estimator for the mean squared error cost, while the MAP estimate is classically discredited for being only asymptotically the Bayes estimator for the uniform cost function. In this article we present recent theoretical and computational observations that challenge this point of view, in particular for high-dimensional sparsity-promoting Bayesian inversion. Using Bregman distances, we present new, proper convex Bayes cost functions for which the MAP estimator is the Bayes estimator. We complement this finding by results that correct further common misconceptions about MAP estimates. In total, we aim to rehabilitate MAP estimates in linear inverse problems with log-concave priors as proper Bayes estimators.
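As a brief sketch of the Bayes cost formalism referred to in the abstract (standard material, not a formula taken from this article): a Bayes estimator minimizes the posterior expected cost, and the two classical choices of cost function yield the CM and (asymptotically) the MAP estimate:

```latex
% Bayes estimator for a cost function \Psi:
\hat{u}_{\Psi}(y) = \operatorname*{arg\,min}_{\hat{u}} \int \Psi(\hat{u}, u)\, \pi(u \mid y)\, \mathrm{d}u .

% Squared-error cost yields the conditional mean (CM) estimate:
\Psi(\hat{u}, u) = \|\hat{u} - u\|_2^2
\quad\Longrightarrow\quad
\hat{u}_{\Psi}(y) = \mathbb{E}[\,u \mid y\,].

% The uniform (0--1) cost on an \varepsilon-ball recovers the MAP
% estimate only in the limit \varepsilon \to 0:
\Psi_{\varepsilon}(\hat{u}, u) =
\begin{cases}
0, & \|\hat{u} - u\| \le \varepsilon,\\
1, & \text{otherwise},
\end{cases}
\quad\Longrightarrow\quad
\hat{u}_{\Psi_{\varepsilon}}(y) \;\longrightarrow\; \operatorname*{arg\,max}_{u} \pi(u \mid y)
\quad \text{as } \varepsilon \to 0 .
```

The article's contribution, per the abstract, is to replace the degenerate uniform cost with proper convex cost functions built from Bregman distances, for which the MAP estimate is an exact rather than merely asymptotic Bayes estimator.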
Recommendations
- Maximum a posteriori probability estimates in infinite-dimensional Bayesian inverse problems
- MAP estimators and their consistency in Bayesian nonparametric inverse problems
- Model distortions in Bayesian MAP reconstruction
- Maximum a posteriori estimators as a limit of Bayes estimators
- Regularized posteriors in linear ill-posed inverse problems
Cited in (33)
- Recent trends on nonlinear filtering for inverse problems
- What do we hear from a drum? A data-consistent approach to quantifying irreducible uncertainty on model inputs by extracting information from correlated model output data
- Variational Bayes' Method for Functions with Applications to Some Inverse Problems
- Wavelet-based priors accelerate maximum-a-posteriori optimization in Bayesian inverse problems
- Bayesian inverse problems and Kalman filters
- Hyperparameter estimation in Bayesian MAP estimation: parameterizations and consistency
- Maximum a posteriori estimators as a limit of Bayes estimators
- Maximum-a-posteriori estimation with Bayesian confidence regions
- Stein variational gradient descent on infinite-dimensional space and applications to statistical inverse problems
- Data-consistent inversion for stochastic input-to-output maps
- Diffusion tensor imaging with deterministic error bounds
- Analysis of the ensemble and polynomial chaos Kalman filters in Bayesian inverse problems
- Recursive linearization method for inverse medium scattering problems with complex mixture Gaussian error learning
- Equivalence of weak and strong modes of measures on topological vector spaces
- Connecting Hamilton-Jacobi partial differential equations with maximum a posteriori and posterior mean estimators for some non-convex priors
- Posterior contraction for empirical Bayesian approach to inverse problems under non-diagonal assumption
- Generalized Modes in Bayesian Inverse Problems
- Maximum a posteriori estimators in ℓp are well-defined for diagonal Gaussian priors
- Model distortions in Bayesian MAP reconstruction
- MAP estimators for piecewise continuous inversion
- On Bayesian posterior mean estimators in imaging sciences and Hamilton-Jacobi partial differential equations
- MAP estimators and their consistency in Bayesian nonparametric inverse problems
- Well-Posed Bayesian Inverse Problems: Priors with Exponential Tails
- A logarithmic image prior for blind deconvolution
- Reconciling Bayesian and perimeter regularization for binary inversion
- Priorconditioned CGLS-based quasi-MAP estimate, statistical stopping rule, and ranking of priors
- On Bayesian estimation and proximity operators
- A scalable algorithm for MAP estimators in Bayesian inverse problems with Besov priors
- Foundations of Bayesian inference for complex statistical models. Abstracts from the workshop held May 2--8, 2021 (hybrid meeting)
- Solving inverse problems using data-driven models
- Solution paths of variational regularization methods for inverse problems
- Physics-informed machine learning with conditional Karhunen-Loève expansions
- Maximum a posteriori probability estimates in infinite-dimensional Bayesian inverse problems
This page was built for publication: Maximum a posteriori estimates in linear inverse problems with log-concave priors are proper Bayes estimators