An Adaptive Surrogate Modeling Based on Deep Neural Networks for Large-Scale Bayesian Inverse Problems
From MaRDI portal
Publication: 5162376
DOI: 10.4208/cicp.OA-2020-0186
zbMath: 1482.65206
arXiv: 1911.08926
OpenAlex: W3103198077
MaRDI QID: Q5162376
Publication date: 2 November 2021
Published in: Communications in Computational Physics
Full work available at URL: https://arxiv.org/abs/1911.08926
Keywords: Markov chain Monte Carlo, Bayesian inverse problems, deep neural networks, multi-fidelity surrogate modeling
MSC classes: Bayesian inference (62F15); Monte Carlo methods (65C05); Inverse problems for PDEs (35R30); Numerical methods for inverse problems for boundary value problems involving PDEs (65N21)
Related Items
- Calibrate, emulate, sample
- Multi-fidelity Bayesian neural networks: algorithms and applications
- Randomized approaches to accelerate MCMC algorithms for Bayesian inverse problems
- Image inversion and uncertainty quantification for constitutive laws of pattern formation
- Bayesian inversion using adaptive polynomial chaos kriging within subset simulation
- Surrogate and inverse modeling for two-phase flow in porous media via theory-guided convolutional neural network
- An Acceleration Strategy for Randomize-Then-Optimize Sampling Via Deep Neural Networks
- Ensemble Inference Methods for Models With Noisy and Expensive Likelihoods
- MFNets: data efficient all-at-once learning of multifidelity surrogates as directed networks of information sources
- A Bayesian scheme for reconstructing obstacles in acoustic waveguides
- Adaptive weighting of Bayesian physics informed neural networks for multitask and multiscale forward and inverse problems
- Probabilistic neural data fusion for learning from an arbitrary number of multi-fidelity data sets
- An Adaptive Non-Intrusive Multi-Fidelity Reduced Basis Method for Parameterized Partial Differential Equations
- Adaptive Ensemble Kalman Inversion with Statistical Linearization
- Surrogate modeling for Bayesian inverse problems based on physics-informed neural networks
- Sequential Model Correction for Nonlinear Inverse Problems
- Residual-based error correction for neural operator accelerated infinite-dimensional Bayesian inverse problems
- Scalable conditional deep inverse Rosenblatt transports using tensor trains and gradient-based dimension reduction
- Deep Importance Sampling Using Tensor Trains with Application to a Priori and a Posteriori Rare Events
- Efficient multifidelity likelihood-free Bayesian inference with adaptive computational resource allocation
- Gradient-free Stein variational gradient descent with kernel approximation
- Stein variational gradient descent with local approximations
- Cholesky-Based Experimental Design for Gaussian Process and Kernel-Based Emulation and Calibration
- Goal-oriented a-posteriori estimation of model error as an aid to parameter estimation
- Multifidelity data fusion in convolutional encoder/decoder networks
- Variable-order approach to nonlocal elasticity: theoretical formulation, order identification via deep learning, and applications
Uses Software
Cites Work
- An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions
- Limitations of polynomial chaos expansions in the Bayesian solution of inverse problems
- Stochastic spectral methods for efficient Bayesian solution of inverse problems
- Dimensionality reduction and polynomial chaos acceleration of Bayesian inference in inverse problems
- Bayesian deep convolutional encoder-decoder networks for surrogate modeling and uncertainty quantification
- Statistical and computational inverse problems
- Bifidelity data-assisted neural networks in nonintrusive reduced-order modeling
- Deep UQ: learning deep neural network surrogate models for high dimensional uncertainty quantification
- Adaptive multi-fidelity polynomial chaos approach to Bayesian inference in inverse problems
- A composite neural network that learns from multi-fidelity data: application to function approximation and inverse PDE problems
- Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations
- Bayesian Calibration of Computer Models
- Ensemble Kalman methods for inverse problems
- Adaptive Construction of Surrogates for the Bayesian Solution of Inverse Problems
- Inverse problems: A Bayesian perspective
- Randomize-Then-Optimize: A Method for Sampling from Posterior Distributions in Nonlinear Inverse Problems
- Parameter and State Model Reduction for Large-Scale Statistical Inverse Problems
- Large‐Scale Inverse Problems and Quantification of Uncertainty
- Inverse problems as statistics
- Large-Scale Machine Learning with Stochastic Gradient Descent
- Fast Bayesian approach for parameter estimation
- Survey of Multifidelity Methods in Uncertainty Propagation, Inference, and Optimization
- Posterior consistency for Gaussian process approximations of Bayesian posterior distributions
- Convergence analysis of surrogate-based methods for Bayesian inverse problems
- Deep learning in high dimension: Neural network expression rates for generalized polynomial chaos expansions in UQ
- Solving high-dimensional partial differential equations using deep learning
- An adaptive multifidelity PC-based ensemble Kalman inversion for inverse problems
- Stochastic Collocation Algorithms Using $l_1$-Minimization for Bayesian Solution of Inverse Problems
- Non‐linear model reduction for uncertainty quantification in large‐scale inverse problems
- Bayesian Inverse Problems with $l_1$ Priors: A Randomize-Then-Optimize Approach
- Sequential Monte Carlo methods for Bayesian elliptic inverse problems