Scaling Up Bayesian Uncertainty Quantification for Inverse Problems Using Deep Neural Networks
DOI: 10.1137/21m1439456 · zbMath: 1514.62009 · arXiv: 2101.03906 · MaRDI QID: Q6109143
Shuyi Li, Babak Shahbaba, Shiwei Lan
Publication date: 30 June 2023
Published in: SIAM/ASA Journal on Uncertainty Quantification
Full work available at URL: https://arxiv.org/abs/2101.03906
Keywords: autoencoder; dimension reduction; emulation; Bayesian inverse problems; convolutional neural network; ensemble Kalman methods
Mathematics Subject Classification: Computational methods for problems pertaining to statistics (62-08); Artificial neural networks and deep learning (68T07); Probabilistic methods, particle methods, etc. for initial value and initial-boundary value problems involving PDEs (65M75)
Related Items (2)
Cites Work
- Hybrid Monte Carlo on Hilbert spaces
- Emulation of higher-order tensors in manifold Monte Carlo methods for Bayesian inverse problems
- Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position
- Weak convergence and optimal scaling of random walk Metropolis algorithms
- The design and analysis of computer experiments.
- Geometric MCMC for infinite-dimensional inverse problems
- Investigation of the sampling performance of ensemble-based methods with a simple reservoir model
- Analysis of iterative ensemble smoothers for solving inverse problems
- Bayesian learning for neural networks
- Accelerated information gradient flow
- Adaptive dimension reduction to accelerate infinite-dimensional geometric Markov chain Monte Carlo
- Proposals which speed up function-space MCMC
- Universality of deep convolutional neural networks
- Dimension-independent likelihood-informed MCMC
- Optimal scalings for local Metropolis-Hastings chains on nonproduct targets in high dimensions
- A stable manifold MCMC method for high dimensions
- Bayesian Calibration of Computer Models
- A regularizing iterative ensemble Kalman method for PDE-constrained inverse problems
- Accelerating Markov Chain Monte Carlo with Active Subspaces
- Ensemble Kalman methods for inverse problems
- Inverse problems: A Bayesian perspective
- Reducing the Dimensionality of Data with Neural Networks
- Long-Time Stability and Accuracy of the Ensemble Kalman--Bucy Filter for Fully Observed Processes and Small Measurement Noise
- Learning Deep Architectures for AI
- MCMC METHODS FOR DIFFUSION BRIDGES
- The Variational Gaussian Approximation Revisited
- Optimal Scaling of Discrete Approximations to Langevin Diffusions
- Deep Convolutional Neural Network for Inverse Problems in Imaging
- Convergence analysis of ensemble Kalman inversion: the linear, noisy case
- Combining Field Data and Computer Simulations for Calibration and Prediction
- Probabilistic Sensitivity Analysis of Complex Models: A Bayesian Approach
- Affine Invariant Interacting Langevin Dynamics for Bayesian Inference
- Multi-Resolution Filters for Massive Spatio-Temporal Data
- Interacting Langevin Diffusions: Gradient Structure and Ensemble Kalman Sampler
- Ensemble Kalman Methods for High-Dimensional Hierarchical Dynamic Space-Time Models
- An Introduction to Variational Autoencoders
- Data Assimilation
- Analysis of the Ensemble Kalman Filter for Inverse Problems
- Learning representations by back-propagating errors
- A Fast Learning Algorithm for Deep Belief Nets
- Approximation by superpositions of a sigmoidal function
- MCMC methods for functions: modifying old algorithms to make them faster