Scalable Optimization-Based Sampling on Function Space
Publication: 5112552
DOI: 10.1137/19M1245220 · zbMath: 1471.62023 · arXiv: 1903.00870 · OpenAlex: W3019869015 · MaRDI QID: Q5112552
Johnathan M. Bardsley, Tiangang Cui, Zheng Wang, Youssef M. Marzouk
Publication date: 29 May 2020
Published in: SIAM Journal on Scientific Computing
Full work available at URL: https://arxiv.org/abs/1903.00870
Markov chain Monte Carlo ⋮ Bayesian inference ⋮ infinite-dimensional inverse problems ⋮ transport maps ⋮ Metropolis independence sampling
Computational methods for problems pertaining to statistics (62-08) ⋮ Bayesian inference (62F15) ⋮ Monte Carlo methods (65C05)
Related Items (7)
- Learning physics-based models from data: perspectives from inverse problems and model reduction
- Randomized maximum likelihood based posterior sampling
- Numerical algorithms for spline interpolation on space of probability density functions
- MALA-within-Gibbs Samplers for High-Dimensional Distributions with Sparse Conditional Structure
- Optimization-Based Markov Chain Monte Carlo Methods for Nonlinear Hierarchical Statistical Inverse Problems
- Variational inference for nonlinear inverse problems via neural net kernels: comparison to Bayesian neural networks, application to topology optimization
- Optimal experimental design for infinite-dimensional Bayesian inverse problems governed by PDEs: a review
Cites Work
- An Explicit Link between Gaussian Fields and Gaussian Markov Random Fields: The Stochastic Partial Differential Equation Approach
- Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions
- A random map implementation of implicit filters
- Diffusion limits of the random walk Metropolis algorithm in high dimensions
- Implicit particle filters for data assimilation
- A note on Metropolis-Hastings kernels for general state spaces
- On the convergence of interior-reflective Newton methods for nonlinear minimization subject to bounds
- Exponential convergence of Langevin distributions and their discrete approximations
- Weak convergence and optimal scaling of random walk Metropolis algorithms
- Geometric MCMC for infinite-dimensional inverse problems
- Importance sampling: intrinsic dimension and computational cost
- On a generalization of the preconditioned Crank-Nicolson Metropolis algorithm
- Statistical and computational inverse problems
- Rates of convergence of the Hastings and Metropolis algorithms
- Dimension-independent likelihood-informed MCMC
- Scalable posterior approximations for large-scale Bayesian inverse problems via likelihood-informed parameter and state reduction
- An introduction to infinite-dimensional analysis
- Optimal Model Management for Multifidelity Monte Carlo Estimation
- Complexity analysis of accelerated MCMC methods for Bayesian inversion
- Inverse problems: A Bayesian perspective
- A Stochastic Newton MCMC Method for Large-Scale Statistical Inverse Problems with Application to Seismic Inversion
- Likelihood-informed dimension reduction for nonlinear inverse problems
- Fast Algorithms for Bayesian Uncertainty Quantification in Large-Scale Linear Inverse Problems Based on Low-Rank Partial Hessian Approximations
- A Randomized Maximum A Posteriori Method for Posterior Sampling of High Dimensional Nonlinear Bayesian Inverse Problems
- Transport Map Accelerated Markov Chain Monte Carlo
- Multilevel Monte Carlo Path Simulation
- Optimal Low-rank Approximations of Bayesian Linear Inverse Problems
- A Hierarchical Multilevel Markov Chain Monte Carlo Algorithm with Applications to Uncertainty Quantification in Subsurface Flow
- MCMC Methods for Diffusion Bridges
- Handbook of Markov Chain Monte Carlo
- Riemann Manifold Langevin and Hamiltonian Monte Carlo Methods
- Parallel Local Approximation MCMC for Expensive Models
- An Interior Trust Region Approach for Nonlinear Minimization Subject to Bounds
- Certified dimension reduction in nonlinear Bayesian inverse problems
- Metropolized Randomized Maximum Likelihood for Improved Sampling from Multimodal Distributions
- Bayesian Inverse Problems with $l_1$ Priors: A Randomize-Then-Optimize Approach
- A Computational Framework for Infinite-Dimensional Bayesian Inverse Problems Part I: The Linearized Case, with Application to Global Seismic Inversion
- MCMC methods for functions: modifying old algorithms to make them faster