Physics-constrained deep learning for high-dimensional surrogate modeling and uncertainty quantification without labeled data
DOI: 10.1016/j.jcp.2019.05.024 · zbMath: 1452.68172 · arXiv: 1901.06314 · OpenAlex: W2908541468 · Wikidata: Q127843943 · Scholia: Q127843943 · MaRDI QID: Q2222275
Yinhao Zhu, Nicholas Zabaras, Phaedon-Stelios Koutsourelakis, Paris Perdikaris
Publication date: 26 January 2021
Published in: Journal of Computational Physics
Full work available at URL: https://arxiv.org/abs/1901.06314
Keywords: uncertainty quantification; surrogate modeling; normalizing flow; conditional generative model; physics-constrained; reverse Kullback-Leibler divergence
MSC classifications:
- Probabilistic methods, particle methods, etc. for boundary value problems involving PDEs (65N75)
- Artificial neural networks and deep learning (68T07)
- Flows in porous media; filtration; seepage (76S05)
- Dependence of solutions to PDEs on initial and/or boundary data and/or on parameters of PDEs (35B30)
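The keywords describe training a surrogate from the governing equations alone, with no labeled solution data. A minimal sketch of that idea (not the paper's actual architecture or objective): score a candidate solution field by the mean squared residual of a discretized PDE, here the 5-point Laplace operator. The function name and grid setup are illustrative assumptions.

```python
import numpy as np

def laplace_residual_loss(u, h):
    """Physics-constrained loss: mean squared residual of the discrete
    Laplace equation on interior grid points. No labeled solution data
    is required -- the governing equation itself supplies the training
    signal. (Illustrative sketch, not the paper's full objective, which
    also involves a reverse Kullback-Leibler term.)"""
    # 5-point finite-difference Laplacian at interior points.
    lap = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
           - 4.0 * u[1:-1, 1:-1]) / h**2
    return float(np.mean(lap**2))

# A linear field u(x, y) = x + 2y is harmonic: its residual loss is ~0.
n = 16
h = 1.0 / (n - 1)
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
u_harmonic = x + 2.0 * y
print(laplace_residual_loss(u_harmonic, h))  # ~0 up to floating point

# A random field violates the PDE and is penalized heavily.
rng = np.random.default_rng(0)
print(laplace_residual_loss(rng.standard_normal((n, n)), h))
```

In the paper's setting this kind of residual is evaluated on the output of a deep generative surrogate and minimized over its parameters, rather than over a single field as here.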
Cites Work
- Multi-output separable Gaussian process: towards an efficient, fully Bayesian paradigm for uncertainty quantification
- Mixed finite element methods for elliptic problems
- Variational principles for nonpotential operators
- The numerical solution of linear ordinary differential equations by feedforward neural networks
- Bayesian deep convolutional encoder-decoder networks for surrogate modeling and uncertainty quantification
- The Deep Ritz Method: a deep learning-based numerical algorithm for solving variational problems
- Deep UQ: learning deep neural network surrogate models for high dimensional uncertainty quantification
- DGM: a deep learning algorithm for solving partial differential equations
- Computing stationary solutions of the two-dimensional Gross-Pitaevskii equation with deflated continuation
- Quantifying model form uncertainty in Reynolds-averaged turbulence models with Bayesian deep neural networks
- Structured Bayesian Gaussian process latent variable model: applications to data-driven dimensionality reduction and high-dimensional inversion
- Adversarial uncertainty quantification in physics-informed neural networks
- Deep multiscale model learning
- Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations
- Nonlinear corrections to Darcy's law at low Reynolds numbers
- Solving ill-posed inverse problems using iterative deep neural networks
- Predicting the output from a complex computer code when fast approximations are available
- Solving high-dimensional partial differential equations using deep learning
- Physics-Informed Generative Adversarial Networks for Stochastic Differential Equations
- A Multiscale Neural Network Based on Hierarchical Matrices
- Bayesian Model and Dimension Reduction for Uncertainty Propagation: Applications in Random Media
- Bayesian Probabilistic Numerical Methods
- Reynolds averaged turbulence modelling using deep neural networks with embedded invariance
- Probabilistic numerics and uncertainty in computations
- Deflation Techniques for Finding Distinct Solutions of Nonlinear Partial Differential Equations
- The effect of weak inertia on flow through a porous medium
- A Proof that Artificial Neural Networks Overcome the Curse of Dimensionality in the Numerical Approximation of Black–Scholes Partial Differential Equations