Fast sampling in a linear-Gaussian inverse problem

DOI: 10.1137/15M1029527
zbMATH Open: 1398.94081
arXiv: 1507.01614
OpenAlex: W2963901148
MaRDI QID: Q3179326
FDO: Q3179326


Authors: Richard A. Norton, Colin Fox


Publication date: 21 December 2016

Published in: SIAM/ASA Journal on Uncertainty Quantification

Abstract: We solve the inverse problem of deblurring a pixelized image of Jupiter by regularized deconvolution and by sample-based Bayesian inference. By efficiently sampling the marginal posterior distribution for hyperparameters, then the full conditional distribution for the deblurred image, we find that we can evaluate the posterior mean faster than regularized inversion, once selection of the regularizing parameter is taken into account. To our knowledge, this is the first demonstration of sampling and inference that takes less compute time than regularized inversion in an inverse problem. Comparison with random-walk Metropolis-Hastings and block Gibbs MCMC shows that marginal-then-conditional sampling also outperforms these more common sampling algorithms, with better scaling in problem size. When problem-specific computations are feasible, the asymptotic cost of an independent sample is one linear solve, implying that sample-based Bayesian inference may be performed directly over function spaces, when that limit exists.
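The marginal-then-conditional strategy described in the abstract can be illustrated with a short sketch. The Python code below is not the paper's implementation; it assumes a generic hierarchical linear-Gaussian model y | x, lam ~ N(Ax, lam^{-1} I) with prior x | delta ~ N(0, (delta L)^{-1}) and flat hyperpriors, and all names (A, L, lam, delta) are illustrative stand-ins for the deblurring setup. A low-dimensional random-walk Metropolis chain explores the marginal posterior of the hyperparameters (x integrated out analytically), and each hyperparameter state is followed by one exact draw of x from its Gaussian full conditional.

```python
import numpy as np
from numpy.linalg import cholesky, slogdet

rng = np.random.default_rng(0)

# Synthetic linear-Gaussian problem (a stand-in for the blurring operator).
m, n = 80, 60
A = rng.standard_normal((m, n)) / np.sqrt(n)   # forward (blurring) operator
L = np.eye(n)                                  # prior precision structure
x_true = rng.standard_normal(n)
y = A @ x_true + 0.1 * rng.standard_normal(m)  # noisy data

def log_marginal(lam, delta):
    """log pi(y | lam, delta) with x integrated out analytically (flat hyperpriors)."""
    Q = lam * A.T @ A + delta * L              # precision of x | y, lam, delta
    b = lam * A.T @ y
    C = cholesky(Q)                            # Q = C C^T, C lower triangular
    w = np.linalg.solve(C, b)                  # w^T w = b^T Q^{-1} b
    _, logdetQ = slogdet(Q)
    return 0.5 * (m * np.log(lam) + n * np.log(delta) - logdetQ
                  - lam * y @ y + w @ w)

def sample_conditional(lam, delta):
    """One exact draw from the Gaussian full conditional pi(x | y, lam, delta)."""
    Q = lam * A.T @ A + delta * L
    b = lam * A.T @ y
    C = cholesky(Q)
    mean = np.linalg.solve(C.T, np.linalg.solve(C, b))  # Q^{-1} b
    z = rng.standard_normal(n)
    return mean + np.linalg.solve(C.T, z)      # C^{-T} z has covariance Q^{-1}

# Random-walk Metropolis on (log lam, log delta) over the *marginal* posterior,
# followed by a conditional draw of x at each step (marginal then conditional).
theta = np.log([50.0, 1.0])                    # initial (log lam, log delta)
lp = log_marginal(*np.exp(theta))
draws = []
for _ in range(2000):
    prop = theta + 0.3 * rng.standard_normal(2)
    lp_prop = log_marginal(*np.exp(prop))
    # Acceptance ratio includes the Jacobian of the log transform.
    if np.log(rng.uniform()) < lp_prop - lp + prop.sum() - theta.sum():
        theta, lp = prop, lp_prop
    draws.append(sample_conditional(*np.exp(theta)))   # one solve per sample

x_post_mean = np.mean(draws[500:], axis=0)     # estimate of the posterior mean image
```

In this sketch the dominant per-sample cost is the factorization and solve inside sample_conditional, which is consistent with the abstract's point that, when problem-specific computations are feasible, an independent posterior sample costs essentially one linear solve.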


Full work available at URL: https://arxiv.org/abs/1507.01614






Cited In (21)





