An Acceleration Strategy for Randomize-Then-Optimize Sampling Via Deep Neural Networks
DOI: 10.4208/JCM.2102-M2020-0339 · zbMATH Open: 1499.62346 · arXiv: 2104.06285 · OpenAlex: W3209989675 · MaRDI QID: Q5079536 · FDO: Q5079536
Publication date: 27 May 2022
Published in: Journal of Computational Mathematics
Full work available at URL: https://arxiv.org/abs/2104.06285
Recommendations
- Neural networks-based variationally enhanced sampling
- Accelerating deep neural network training with inconsistent stochastic gradient descent
- Random neural network methods and deep learning
- Quasi-Random Sampling for Multivariate Distributions via Generative Neural Networks
- A random energy approach to deep learning
- CAS4DL: Christoffel adaptive sampling for function approximation via deep learning
- A Deep Generative Approach to Conditional Sampling
- Quasi-Monte Carlo sampling for solving partial differential equations by deep neural networks
MSC Classification
- Monte Carlo methods (65C05)
- Artificial neural networks and deep learning (68T07)
- Neural nets and related approaches to inference from stochastic processes (62M45)
- Probabilistic models, generic numerical methods in probability and statistics (65C20)
- Inverse problems for PDEs (35R30)
Cites Work
- The no-U-turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo
- Bayesian calibration of computer models. (With discussion)
- Pattern recognition and machine learning.
- Statistical and computational inverse problems.
- Handbook of Markov Chain Monte Carlo
- A random map implementation of implicit filters
- Deep learning
- Riemann manifold Langevin and Hamiltonian Monte Carlo methods. With discussion and authors' reply
- Inverse problems: a Bayesian perspective
- Parameter and state model reduction for large-scale statistical inverse problems
- Approximation errors and model reduction with an application in optical diffusion tomography
- A stochastic Newton MCMC method for large-scale statistical inverse problems with application to seismic inversion
- Dimensionality reduction and polynomial chaos acceleration of Bayesian inference in inverse problems
- A stochastic collocation approach to Bayesian inference in inverse problems
- Emulation of higher-order tensors in manifold Monte Carlo methods for Bayesian inverse problems
- Adaptive multi-fidelity polynomial chaos approach to Bayesian inference in inverse problems
- Nonlinear model reduction for uncertainty quantification in large-scale inverse problems
- Stochastic spectral methods for efficient Bayesian solution of inverse problems
- Large-scale machine learning with stochastic gradient descent
- Data-driven model reduction for the Bayesian solution of inverse problems
- Stochastic collocation algorithms using \(l_1\)-minimization for Bayesian solution of inverse problems
- Deep UQ: learning deep neural network surrogate models for high dimensional uncertainty quantification
- Posterior consistency for Gaussian process approximations of Bayesian posterior distributions
- Convergence analysis of surrogate-based methods for Bayesian inverse problems
- Geometric MCMC for infinite-dimensional inverse problems
- Title not available
- An adaptive multifidelity PC-based ensemble Kalman inversion for inverse problems
- An adaptive surrogate modeling based on deep neural networks for large-scale Bayesian inverse problems
- Bayesian inverse problems with \(l_1\) priors: a randomize-then-optimize approach
Cited In (4)
Uses Software