Fast Markov chain Monte Carlo sampling for sparse Bayesian inference in high-dimensional inverse problems using L1-type priors
From MaRDI portal
Publication:4920045
Abstract: Sparsity has become a key concept for solving high-dimensional inverse problems using variational regularization techniques. Recently, encoding similar sparsity constraints in the prior distribution of the Bayesian framework for inverse problems has attracted attention. Important questions about the relation between regularization theory and Bayesian inference still need to be addressed when sparsity-promoting inversion is used. A practical obstacle for these examinations is the lack of fast posterior sampling algorithms for sparse, high-dimensional Bayesian inversion: accessing the full range of Bayesian inference methods requires being able to draw samples from the posterior probability distribution in a fast and efficient way. This is usually done using Markov chain Monte Carlo (MCMC) sampling algorithms. In this article, we develop and examine a new implementation of a single-component Gibbs MCMC sampler for sparse priors relying on L1-norms. We demonstrate that the efficiency of our Gibbs sampler increases when the level of sparsity or the dimension of the unknowns is increased. This is contrary to the behavior of the most commonly applied Metropolis-Hastings (MH) sampling schemes: we demonstrate that the efficiency of MH schemes for L1-type priors decreases dramatically when the level of sparsity or the dimension of the unknowns is increased. In practice, Bayesian inversion for L1-type priors using MH samplers is not feasible at all. As this behavior is commonly believed to be an intrinsic feature of MCMC sampling, the performance of our Gibbs sampler also challenges common beliefs about the applicability of sample-based Bayesian inference.
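The single-component Gibbs approach described in the abstract exploits the fact that, for a Gaussian likelihood with an L1 (Laplace) prior, each one-dimensional full conditional is a piecewise Gaussian that can be sampled exactly. The sketch below is a minimal illustration of that idea, not the paper's actual implementation; the function name `gibbs_l1`, the toy problem, and all parameter choices are assumptions made for the example. It samples the conditional of each coordinate as a two-component mixture of truncated Gaussians (one branch on each side of zero).

```python
import numpy as np
from scipy.stats import norm, truncnorm

def gibbs_l1(A, y, sigma2=1.0, lam=1.0, n_iter=500, rng=None):
    """Single-component Gibbs sampler (sketch) for the posterior
    p(x | y) ~ exp(-||y - A x||^2 / (2*sigma2) - lam * ||x||_1).
    Each 1D full conditional is exp(-c/2*(x_i - mu)^2 - lam*|x_i|),
    a piecewise Gaussian sampled exactly via two truncated normals."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    x = np.zeros(n)
    r = y - A @ x                       # running residual y - A x
    col_norm2 = np.sum(A**2, axis=0)    # ||a_i||^2 for each column
    samples = np.empty((n_iter, n))
    for t in range(n_iter):
        for i in range(n):
            a = A[:, i]
            r += a * x[i]               # residual with coordinate i removed
            c = col_norm2[i] / sigma2   # conditional precision
            mu = (a @ r) / col_norm2[i]
            s = 1.0 / np.sqrt(c)
            # branch means after completing the square on each half-line
            m_pos, m_neg = mu - lam / c, mu + lam / c
            # log normalizing masses of the positive/negative branches
            lw_pos = 0.5 * c * m_pos**2 + norm.logcdf(m_pos / s)
            lw_neg = 0.5 * c * m_neg**2 + norm.logcdf(-m_neg / s)
            p_pos = np.exp(lw_pos - np.logaddexp(lw_pos, lw_neg))
            if rng.random() < p_pos:    # N(m_pos, s^2) truncated to [0, inf)
                x[i] = truncnorm.rvs(-m_pos / s, np.inf,
                                     loc=m_pos, scale=s, random_state=rng)
            else:                       # N(m_neg, s^2) truncated to (-inf, 0]
                x[i] = truncnorm.rvs(-np.inf, -m_neg / s,
                                     loc=m_neg, scale=s, random_state=rng)
            r -= a * x[i]               # restore full residual
        samples[t] = x
    return samples
```

Because every conditional draw is exact (no accept/reject step), the chain has no rejections by construction, which is one intuition for why such samplers can remain efficient where Metropolis-Hastings proposals for spiky L1 posteriors collapse to tiny acceptance rates.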
Recommendations
- Fast Gibbs sampling for high-dimensional Bayesian inversion
- Low-rank independence samplers in hierarchical Bayesian inverse problems
- Bayesian inverse problems with \(l_1\) priors: a randomize-then-optimize approach
- Sparse reconstructions from few noisy data: analysis of hierarchical Bayesian models with generalized gamma hyperpriors
- MALA-within-Gibbs samplers for high-dimensional distributions with sparse conditional structure
Cited in (20)
- Fast sampling in a linear-Gaussian inverse problem
- Model selection in the sparsity context for inverse problems in Bayesian framework
- A proximal Markov chain Monte Carlo method for Bayesian inference in imaging inverse problems: when Langevin meets Moreau
- Bayesian approach for inverse interior scattering problems with limited aperture
- Enhanced sampling schemes for MCMC based blind Bernoulli-Gaussian deconvolution
- Low-rank independence samplers in hierarchical Bayesian inverse problems
- Certified coordinate selection for high-dimensional Bayesian inversion with Laplace prior
- Fast Gibbs sampling for high-dimensional Bayesian inversion
- Comparison of statistical inversion with iteratively regularized Gauss-Newton method for image reconstruction in electrical impedance tomography
- Two Metropolis-Hastings Algorithms for Posterior Measures with Non-Gaussian Priors in Infinite Dimensions
- Computationally efficient sampling methods for sparsity promoting hierarchical Bayesian models
- A hierarchical Bayesian perspective on majorization-minimization for non-convex sparse regression: application to M/EEG source imaging
- Sparse Online Variational Bayesian Regression
- MALA-within-Gibbs samplers for high-dimensional distributions with sparse conditional structure
- Efficient Bayesian computation by proximal Markov chain Monte Carlo: when Langevin meets Moreau
- Sparse reconstructions from few noisy data: analysis of hierarchical Bayesian models with generalized gamma hyperpriors
- Selection of polynomial chaos bases via Bayesian model uncertainty methods with applications to sparse approximation of PDEs with stochastic inputs
- Bayesian inverse problems with \(l_1\) priors: a randomize-then-optimize approach
- Optimization-Based Markov Chain Monte Carlo Methods for Nonlinear Hierarchical Statistical Inverse Problems
- Solution paths of variational regularization methods for inverse problems