Rate-optimal refinement strategies for local approximation MCMC
Abstract: Many Bayesian inference problems involve target distributions whose density functions are computationally expensive to evaluate. Replacing the target density with a local approximation based on a small number of carefully chosen density evaluations can significantly reduce the computational expense of Markov chain Monte Carlo (MCMC) sampling. Moreover, continual refinement of the local approximation can guarantee asymptotically exact sampling. We devise a new strategy for balancing the decay rate of the bias due to the approximation with that of the MCMC variance. We prove that the error of the resulting local approximation MCMC (LA-MCMC) algorithm decays at roughly the expected rate, and we demonstrate this rate numerically. We also introduce an algorithmic parameter that guarantees convergence given very weak tail bounds, significantly strengthening previous convergence results. Finally, we apply LA-MCMC to a computationally intensive Bayesian inverse problem arising in groundwater hydrology.
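The sketch below illustrates the general idea the abstract describes, under stated assumptions: a Metropolis-Hastings chain evaluates a cheap local regression surrogate instead of the expensive target density, and occasionally refines the surrogate with an exact evaluation at a rate that decays over time. The surrogate construction, the t^(-gamma) refinement schedule, and all names (expensive_log_density, LocalSurrogate, la_mcmc) are illustrative assumptions, not the paper's rate-optimal refinement strategy.

```python
# Minimal sketch of local approximation MCMC (LA-MCMC).
# The surrogate, refinement rule, and all names are illustrative
# assumptions, not the algorithm analyzed in the paper.
import numpy as np

rng = np.random.default_rng(0)

def expensive_log_density(x):
    # Stand-in for a costly model evaluation (e.g., a PDE solve).
    return -0.5 * np.sum(x ** 2)

class LocalSurrogate:
    """Local fit to cached exact evaluations: a quadratic regression
    in the radial distance to the k nearest cached points (a crude
    simplification of local polynomial regression)."""
    def __init__(self, k=10):
        self.k = k
        self.points, self.values = [], []

    def add(self, x, fx):
        self.points.append(x)
        self.values.append(fx)

    def __call__(self, x):
        pts = np.asarray(self.points)
        vals = np.asarray(self.values)
        # Select the k cached evaluations nearest to x.
        idx = np.argsort(np.linalg.norm(pts - x, axis=1))[: self.k]
        r = np.linalg.norm(pts[idx] - x, axis=1)
        A = np.vander(r, 3)                      # columns: r^2, r, 1
        coef, *_ = np.linalg.lstsq(A, vals[idx], rcond=None)
        return coef[-1]                          # fitted value at r = 0

def la_mcmc(x0, n_steps, step=0.5, gamma=0.6):
    """gamma sets how fast the refinement probability t^(-gamma) decays,
    trading approximation bias against MCMC variance (assumed rule)."""
    surr = LocalSurrogate()
    # Seed the surrogate with a few exact evaluations near x0.
    for _ in range(surr.k + 2):
        z = x0 + rng.normal(size=x0.size)
        surr.add(z, expensive_log_density(z))
    chain, x, lx = [x0], x0, surr(x0)
    for t in range(1, n_steps + 1):
        y = x + step * rng.normal(size=x.size)   # symmetric proposal
        ly = surr(y)
        # With decaying probability, refine with one exact evaluation.
        if rng.uniform() < t ** (-gamma):
            surr.add(y, expensive_log_density(y))
            ly = surr(y)
        # Metropolis accept/reject using surrogate log-densities.
        if np.log(rng.uniform()) < ly - lx:
            x, lx = y, ly
        chain.append(x)
    return np.asarray(chain)

samples = la_mcmc(np.zeros(2), n_steps=2000)
print(samples.mean(axis=0))
```

As in the abstract, refinement keeps the chain asymptotically exact while amortizing expensive evaluations; the paper's contribution is choosing the refinement rate so the approximation bias decays in balance with the MCMC variance.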
Recommendations
- Parallel local approximation MCMC for expensive models
- Localization for MCMC: sampling high-dimensional posterior distributions with local structure
- Dimension-Independent MCMC Sampling for Inverse Problems with Non-Gaussian Priors
- Optimal scalings for local Metropolis-Hastings chains on nonproduct targets in high dimensions
- Dimension-independent likelihood-informed MCMC
Cites work
- A Hierarchical Multilevel Markov Chain Monte Carlo Algorithm with Applications to Uncertainty Quantification in Subsurface Flow
- A stochastic collocation approach to Bayesian inference in inverse problems
- Accelerating Markov chain Monte Carlo with active subspaces
- Adaptive construction of surrogates for the Bayesian solution of inverse problems
- Adaptive sparse polynomial chaos expansion based on least angle regression
- An adaptive Metropolis algorithm
- Approximation of Bayesian Inverse Problems for PDEs
- Bayesian solution uncertainty quantification for differential equations
- Bayesian static parameter estimation for partially observed diffusions via multilevel Monte Carlo
- Certified dimension reduction in nonlinear Bayesian inverse problems
- Consistent nonparametric regression. Discussion
- Coupling and Ergodicity of Adaptive Markov Chain Monte Carlo Algorithms
- Geometric convergence and central limit theorems for multidimensional Hastings and Metropolis algorithms
- Introduction to Derivative-Free Optimization
- Likelihood-informed dimension reduction for nonlinear inverse problems
- Markov chains and stochastic stability
- Monte Carlo errors with less errors
- Parallel local approximation MCMC for expensive models
- Perturbation bounds for Monte Carlo within Metropolis via restricted approximations
- Perturbation theory for Markov chains via Wasserstein distance
- Posterior consistency for Gaussian process approximations of Bayesian posterior distributions
- Quasi-Monte-Carlo methods and the dispersion of point sequences
- Scalable posterior approximations for large-scale Bayesian inverse problems via likelihood-informed parameter and state reduction
- Sparsity in Bayesian inversion of parametric operator equations
- Statistical and computational inverse problems
- Statistical inverse problems: discretization, model reduction and inverse crimes
- The containment condition and AdapFail algorithms
- Universal consistency of local polynomial kernel regression estimates
- X-TMCMC: adaptive kriging for Bayesian inverse modeling
Cited in 2 documents