Certified dimension reduction in nonlinear Bayesian inverse problems
From MaRDI portal
Publication: 5082037
DOI: 10.1090/mcom/3737
OpenAlex: W2848712137
Wikidata: Q114849153 (Scholia: Q114849153)
MaRDI QID: Q5082037
Olivier Zahm, Alessio Spantini, Tiangang Cui, Youssef M. Marzouk, Kody J. H. Law
Publication date: 15 June 2022
Published in: Mathematics of Computation
Full work available at URL: https://arxiv.org/abs/1807.03712
Numerical methods (including Monte Carlo methods) (91G60)
Bayesian problems; characterization of Bayes procedures (62C10)
Research exposition (monographs, survey articles) pertaining to approximations and expansions (41-02)
Related Items (16)
- Projected Wasserstein Gradient Descent for High-Dimensional Bayesian Inference
- Prior normalization for certified likelihood-informed subspace detection of Bayesian inverse problems
- Rate-optimal refinement strategies for local approximation MCMC
- Generalized bounds for active subspaces
- A unified performance analysis of likelihood-informed subspace methods
- Model Reduction for Nonlinear Systems by Balanced Truncation of State and Gradient Covariance
- Multifidelity Dimension Reduction via Active Subspaces
- Certified Dimension Reduction for Bayesian Updating with the Cross-Entropy Method
- A greedy sensor selection algorithm for hyperparameterized linear Bayesian inverse problems with correlated noise models
- Large-scale Bayesian optimal experimental design with derivative-informed projected neural network
- Scalable Optimization-Based Sampling on Function Space
- Large Deviation Theory-based Adaptive Importance Sampling for Rare Events in High Dimensions
- Deep Importance Sampling Using Tensor Trains with Application to a Priori and a Posteriori Rare Events
- Multilevel dimension-independent likelihood-informed MCMC for large-scale inverse problems
- Bayesian inference of random fields represented with the Karhunen-Loève expansion
- Gradient-Based Dimension Reduction of Multivariate Vector-Valued Functions
Cites Work
- An Explicit Link between Gaussian Fields and Gaussian Markov Random Fields: The Stochastic Partial Differential Equation Approach
- The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo
- Characterization of Talagrand's transport-entropy inequalities in metric spaces
- Learning functions of few arbitrary linear parameters in high dimensions
- Learning non-parametric basis independent models from point queries via low-rank methods
- A note on the Karhunen-Loève expansions for infinite-dimensional Bayesian inverse problems
- Dimensionality reduction and polynomial chaos acceleration of Bayesian inference in inverse problems
- Lower bounds for eigenvalues of regular Sturm-Liouville operators and the logarithmic Sobolev inequality
- Generalization of an inequality by Talagrand and links with the logarithmic Sobolev inequality
- From Brunn-Minkowski to Brascamp-Lieb and to logarithmic Sobolev inequalities
- Fast model updating coupling Bayesian inference and PGD model reduction
- Importance sampling: intrinsic dimension and computational cost
- Statistical and computational inverse problems.
- Logarithmic Sobolev inequalities and stochastic Ising models
- On the convergence of the Laplace approximation and noise-level-robustness of Laplace-based Monte Carlo methods for Bayesian inverse problems
- Nonasymptotic upper bounds for the reconstruction error of PCA
- Dimension-independent likelihood-informed MCMC
- Scalable posterior approximations for large-scale Bayesian inverse problems via likelihood-informed parameter and state reduction
- Statistical properties of kernel principal component analysis
- Capturing ridge functions in high dimensions from point queries
- An adaptive version for the Metropolis adjusted Langevin algorithm with a truncated drift
- Concentration Inequalities
- Accelerating Markov Chain Monte Carlo with Active Subspaces
- Accelerated Dimension-Independent Adaptive Metropolis
- Trace optimization and eigenproblems in dimension reduction methods
- Inverse problems: A Bayesian perspective
- Likelihood-informed dimension reduction for nonlinear inverse problems
- Data-driven model reduction for the Bayesian solution of inverse problems
- Parameter and State Model Reduction for Large-Scale Statistical Inverse Problems
- Fast Algorithms for Bayesian Uncertainty Quantification in Large-Scale Linear Inverse Problems Based on Low-Rank Partial Hessian Approximations
- Multilevel Sequential Monte Carlo with Dimension-Independent Likelihood-Informed Proposals
- Sequential Monte Carlo Samplers
- Optimal Low-rank Approximations of Bayesian Linear Inverse Problems
- On the Optimality of Conditional Expectation as a Bregman Predictor
- Logarithmic Sobolev Inequalities
- Foundations of Modern Probability
- Séminaire de Probabilités XXXVI
- Riemann Manifold Langevin and Hamiltonian Monte Carlo Methods
- Markov Chain Monte Carlo Methods for High Dimensional Inversion in Remote Sensing
- Analysis and Geometry of Markov Diffusion Operators
- Gradient-Based Dimension Reduction of Multivariate Vector-Valued Functions
- Ridge Functions
- Accurate Solution of Bayesian Inverse Uncertainty Quantification Problems Combining Reduced Basis Methods and Reduction Error Models
- Data-free likelihood-informed dimension reduction of Bayesian inverse problems
- An adaptive Metropolis algorithm