An Acceleration Strategy for Randomize-Then-Optimize Sampling Via Deep Neural Networks
Publication:5079536
Abstract: Randomize-then-optimize (RTO) is widely used for sampling from posterior distributions in Bayesian inverse problems. However, RTO can be computationally intensive for complex problems due to repeated evaluations of the expensive forward model and its gradient. In this work, we present a novel strategy to substantially reduce the computational burden of RTO by using a goal-oriented deep neural network (DNN) surrogate approach. In particular, the training points for the DNN surrogate are drawn from a local approximation of the posterior distribution, and the resulting algorithm is a flexible and efficient sampler that converges to the direct RTO approach. We demonstrate the accuracy and efficiency of the new algorithm (DNN-RTO) on a Bayesian inverse problem governed by a benchmark elliptic PDE, where it significantly outperforms traditional RTO.
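For orientation, here is a minimal sketch of the pipeline the abstract describes: draw DNN training points from a local (Laplace-style) Gaussian approximation of the posterior around the MAP point, fit a surrogate of the forward model, and run RTO's perturb-and-optimize loop against the cheap surrogate. Everything below is an illustrative assumption rather than the authors' implementation: the toy forward model, the use of SciPy's least_squares for the inner optimizations, and scikit-learn's MLPRegressor standing in for the DNN.

```python
# Illustrative DNN-RTO sketch, NOT the paper's code. Assumes a toy
# forward model, a Laplace approximation as the "local approximated
# posterior", and MLPRegressor as the DNN surrogate.
import numpy as np
from scipy.optimize import least_squares
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy nonlinear forward model F: R^2 -> R^3 (stand-in for an expensive PDE solve).
def forward(u):
    return np.array([u[0]**2 + u[1], np.sin(u[0]) + u[1]**2, u[0] * u[1]])

sigma = 0.1                                   # observation noise std
u_true = np.array([0.8, -0.5])
y = forward(u_true) + sigma * rng.standard_normal(3)

# Whitened map H(u) for a standard-normal prior: stack the prior term
# and the scaled data misfit, as in the standard RTO formulation.
def H(u, f=forward):
    return np.concatenate([u, f(u) / sigma])

ybar = np.concatenate([np.zeros(2), y / sigma])

# 1) MAP point, computed once with the true (expensive) model.
map_res = least_squares(lambda u: H(u) - ybar, x0=np.zeros(2))
u_map = map_res.x

# 2) Training points from a local Gaussian (Laplace) approximation of
#    the posterior around the MAP point.
J = map_res.jac                               # Jacobian of H at u_map
cov_local = np.linalg.inv(J.T @ J)            # Gauss-Newton posterior covariance
train_u = rng.multivariate_normal(u_map, cov_local, size=500)
train_f = np.array([forward(u) for u in train_u])  # the expensive evaluations

# 3) Fit a small DNN surrogate of the forward model on those points.
dnn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, tol=1e-8)
dnn.fit(train_u, train_f)
surrogate = lambda u: dnn.predict(u.reshape(1, -1)).ravel()

# 4) RTO loop with the surrogate: perturb the data with a standard-normal
#    draw and re-solve the projected least-squares problem for each sample.
Q, _ = np.linalg.qr(J)                        # fixed thin-QR factor from the MAP Jacobian
samples = []
for _ in range(200):
    eps = rng.standard_normal(ybar.size)
    res = least_squares(lambda u: Q.T @ (H(u, surrogate) - ybar - eps), x0=u_map)
    samples.append(res.x)
samples = np.array(samples)
print("posterior mean estimate:", samples.mean(axis=0))
```

A complete RTO sampler would additionally reweight or Metropolize the optimization outputs using the RTO proposal density; the sketch omits that correction for brevity.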
Recommendations
- Neural networks-based variationally enhanced sampling
- Accelerating deep neural network training with inconsistent stochastic gradient descent
- Random neural network methods and deep learning
- Quasi-Random Sampling for Multivariate Distributions via Generative Neural Networks
- A random energy approach to deep learning
- CAS4DL: Christoffel adaptive sampling for function approximation via deep learning
- A Deep Generative Approach to Conditional Sampling
- Quasi-Monte Carlo sampling for solving partial differential equations by deep neural networks
Cites work
- A random map implementation of implicit filters
- A stochastic Newton MCMC method for large-scale statistical inverse problems with application to seismic inversion
- A stochastic collocation approach to Bayesian inference in inverse problems
- An adaptive multifidelity PC-based ensemble Kalman inversion for inverse problems
- Adaptive multi-fidelity polynomial chaos approach to Bayesian inference in inverse problems
- An adaptive surrogate modeling based on deep neural networks for large-scale Bayesian inverse problems
- Approximation errors and model reduction with an application in optical diffusion tomography
- Bayesian calibration of computer models (with discussion)
- Bayesian inverse problems with \(l_1\) priors: a randomize-then-optimize approach
- Convergence analysis of surrogate-based methods for Bayesian inverse problems
- Data-driven model reduction for the Bayesian solution of inverse problems
- Deep UQ: learning deep neural network surrogate models for high dimensional uncertainty quantification
- Deep learning
- Dimensionality reduction and polynomial chaos acceleration of Bayesian inference in inverse problems
- Emulation of higher-order tensors in manifold Monte Carlo methods for Bayesian inverse problems
- Geometric MCMC for infinite-dimensional inverse problems
- Handbook of Markov Chain Monte Carlo
- Inverse problems: a Bayesian perspective
- Large-scale machine learning with stochastic gradient descent
- Nonlinear model reduction for uncertainty quantification in large-scale inverse problems
- Parameter and state model reduction for large-scale statistical inverse problems
- Pattern recognition and machine learning
- Posterior consistency for Gaussian process approximations of Bayesian posterior distributions
- Randomize-then-optimize: a method for sampling from posterior distributions in nonlinear inverse problems
- Riemann manifold Langevin and Hamiltonian Monte Carlo methods (with discussion and authors' reply)
- Statistical and computational inverse problems
- Stochastic collocation algorithms using \(l_1\)-minimization for Bayesian solution of inverse problems
- Stochastic spectral methods for efficient Bayesian solution of inverse problems
- The no-U-turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo
Cited in (5)
- Local antithetic sampling with scrambled nets
- Surrogate modeling for Bayesian inverse problems based on physics-informed neural networks
- Quasi-Random Sampling for Multivariate Distributions via Generative Neural Networks
- Adaptive operator learning for infinite-dimensional Bayesian inverse problems
- Scalable Optimization-Based Sampling on Function Space