Maximum a posteriori estimates in linear inverse problems with log-concave priors are proper Bayes estimators
Publication: 2936498
DOI: 10.1088/0266-5611/30/11/114004
zbMATH Open: 1302.62010
arXiv: 1402.5297
OpenAlex: W2064991026
MaRDI QID: Q2936498
FDO: Q2936498
Publication date: 17 December 2014
Published in: Inverse Problems
Abstract: A frequent matter of debate in Bayesian inversion is the question of which of the two principal point estimators, the maximum a posteriori (MAP) estimate or the conditional mean (CM) estimate, is to be preferred. As the MAP estimate corresponds to the solution given by variational regularization techniques, this is also a constant matter of debate between the two research areas. Following a theoretical argument, the Bayes cost formalism, the CM estimate is classically preferred for being the Bayes estimator for the mean squared error cost, while the MAP estimate is classically discredited for being only asymptotically the Bayes estimator for the uniform cost function. In this article we present recent theoretical and computational observations that challenge this point of view, in particular for high-dimensional sparsity-promoting Bayesian inversion. Using Bregman distances, we present new, proper convex Bayes cost functions for which the MAP estimator is the Bayes estimator. We complement this finding by results that correct further common misconceptions about MAP estimates. In total, we aim to rehabilitate MAP estimates in linear inverse problems with log-concave priors as proper Bayes estimators.
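For orientation, the following LaTeX sketch spells out the Bayes cost formalism referenced in the abstract. The notation (forward operator A, posterior pi(u|f), convex log-prior J) is assumed here for illustration and is not quoted from the paper itself.

```latex
% Sketch of the Bayes cost formalism (notation assumed, not quoted from the paper).
% A point estimate is a Bayes estimator for a cost \Psi if it minimizes the
% expected posterior cost:
\[
  \hat{u}_{\Psi} \;=\; \operatorname*{arg\,min}_{w}\;
    \mathbb{E}_{u \sim \pi(\cdot \mid f)}\bigl[\Psi(w, u)\bigr].
\]
% Classical results: the squared-error cost yields the conditional mean (CM)
% estimate, while the uniform (0-1) cost yields the MAP estimate only
% asymptotically:
\[
  \Psi(w,u) = \|w - u\|_2^2
    \;\Longrightarrow\; \hat{u}_{\Psi} = \mathbb{E}[u \mid f]
    \quad (\text{CM estimate}),
\]
\[
  \Psi_{\varepsilon}(w,u) = \mathbb{1}\{\|w - u\| > \varepsilon\},
  \qquad
  \hat{u}_{\Psi_{\varepsilon}} \;\xrightarrow{\;\varepsilon \to 0\;}\;
    \operatorname*{arg\,max}_{u}\, \pi(u \mid f)
    \quad (\text{MAP estimate}).
\]
% The paper's central observation, sketched under the assumption of a linear
% model f = Au + noise and a log-concave prior \propto \exp(-J(u)) with J
% convex: the Bregman distance
% D_J(w,u) = J(w) - J(u) - \langle p, w - u \rangle, \; p \in \partial J(u),
% defines a proper convex cost
\[
  \Psi_{\mathrm{Breg}}(w,u) \;=\; \|A(w - u)\|_2^2 \;+\; 2\, D_J(w, u),
\]
% for which the MAP estimate itself is the Bayes estimator.
```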
Full work available at URL: https://arxiv.org/abs/1402.5297
MSC classifications:
- Bayesian inference (62F15)
- Bayesian problems; characterization of Bayes procedures (62C10)
- Numerical solution to inverse problems in abstract spaces (65J22)
Cited In (23)
- Variational Bayes' Method for Functions with Applications to Some Inverse Problems
- Recent trends on nonlinear filtering for inverse problems
- What do we hear from a drum? A data-consistent approach to quantifying irreducible uncertainty on model inputs by extracting information from correlated model output data
- Wavelet-based priors accelerate maximum-a-posteriori optimization in Bayesian inverse problems
- Data-consistent inversion for stochastic input-to-output maps
- Recursive linearization method for inverse medium scattering problems with complex mixture Gaussian error learning
- Equivalence of weak and strong modes of measures on topological vector spaces
- Diffusion tensor imaging with deterministic error bounds
- Reconciling Bayesian and Perimeter Regularization for Binary Inversion
- Connecting Hamilton-Jacobi partial differential equations with maximum a posteriori and posterior mean estimators for some non-convex priors
- Generalized Modes in Bayesian Inverse Problems
- Posterior contraction for empirical Bayesian approach to inverse problems under non-diagonal assumption
- On Bayesian posterior mean estimators in imaging sciences and Hamilton-Jacobi partial differential equations
- Well-Posed Bayesian Inverse Problems: Priors with Exponential Tails
- Analysis of the Ensemble and Polynomial Chaos Kalman Filters in Bayesian Inverse Problems
- A logarithmic image prior for blind deconvolution
- On Bayesian estimation and proximity operators
- Bayesian Inverse Problems and Kalman Filters
- Foundations of Bayesian inference for complex statistical models. Abstracts from the workshop held May 2--8, 2021 (hybrid meeting)
- Stein Variational Gradient Descent on Infinite-Dimensional Space and Applications to Statistical Inverse Problems
- Solving inverse problems using data-driven models
- Solution paths of variational regularization methods for inverse problems
- Physics-informed machine learning with conditional Karhunen-Loève expansions
This page was built for publication: Maximum a posteriori estimates in linear inverse problems with log-concave priors are proper Bayes estimators