Bi-fidelity variational auto-encoder for uncertainty quantification
From MaRDI portal
Publication:6202982
Abstract: Quantifying the uncertainty of quantities of interest (QoIs) from physical systems is a primary objective in model validation. However, achieving this goal entails balancing the need for computational efficiency with the requirement for numerical accuracy. To address this trade-off, we propose a novel bi-fidelity formulation of variational auto-encoders (BF-VAE) designed to estimate the uncertainty associated with a QoI from low-fidelity (LF) and high-fidelity (HF) samples of the QoI. This model allows for the approximation of the statistics of the HF QoI by leveraging information derived from its LF counterpart. Specifically, we design a bi-fidelity auto-regressive model in the latent space that is integrated within the VAE's probabilistic encoder-decoder structure. An effective algorithm is proposed to maximize the variational lower bound of the HF log-likelihood in the presence of limited HF data, resulting in the synthesis of HF realizations at a reduced computational cost. Additionally, we introduce the concept of the bi-fidelity information bottleneck (BF-IB) to provide an information-theoretic interpretation of the proposed BF-VAE model. Our numerical results demonstrate that the BF-VAE achieves considerably improved accuracy compared to a VAE trained on HF data alone when HF data are limited.
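The abstract's key components, an encoder-decoder pair with an auto-regressive map linking the LF and HF latent representations, and a variational lower bound (ELBO) as the training objective, can be sketched minimally as follows. This is an illustrative, untrained toy in NumPy under assumed dimensions and linear maps; all names (`encode`, `decode`, the affine latent map `A`, `b`) are hypothetical and do not reproduce the authors' architecture or training algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: d = QoI dimension, k = latent dimension (assumptions).
d, k = 8, 2

# Probabilistic encoder/decoder with random, untrained weights (toy stand-ins).
W_enc = 0.1 * rng.normal(size=(k, d))  # encoder mean weights
W_dec = 0.1 * rng.normal(size=(d, k))  # decoder mean weights
log_var = np.full(k, -1.0)             # encoder log-variance (shared, fixed here)

# Bi-fidelity auto-regressive map in latent space: z_HF = A @ z_LF + b.
# A linear map is the simplest instance of such a latent-space link.
A = 0.9 * np.eye(k)
b = np.zeros(k)

def encode(x):
    """Map a sample x to the mean and log-variance of q(z | x)."""
    return W_enc @ x, log_var

def reparameterize(mu, lv):
    """Draw z ~ N(mu, diag(exp(lv))) via the reparameterization trick."""
    return mu + np.exp(0.5 * lv) * rng.normal(size=mu.shape)

def decode(z):
    """Decoder mean of p(x | z)."""
    return W_dec @ z

def elbo(x_hf, x_lf):
    """One-sample Monte Carlo estimate of a variational lower bound:
    Gaussian reconstruction term for the HF sample minus
    KL(q(z | x_lf) || N(0, I)). Constants are dropped."""
    mu, lv = encode(x_lf)
    z_lf = reparameterize(mu, lv)
    z_hf = A @ z_lf + b                      # auto-regressive latent transfer
    x_rec = decode(z_hf)
    rec = -0.5 * np.sum((x_hf - x_rec) ** 2)  # unit-variance Gaussian log-lik.
    kl = -0.5 * np.sum(1.0 + lv - mu ** 2 - np.exp(lv))
    return rec - kl

# Toy correlated LF/HF pair of the QoI (fabricated data for illustration only).
x_lf = rng.normal(size=d)
x_hf = x_lf + 0.05 * rng.normal(size=d)
print(elbo(x_hf, x_lf))
```

Maximizing this bound over the encoder, decoder, and latent map parameters (here frozen) is what the paper's training algorithm does with limited HF data; the LF encoder supplies the latent structure that the HF branch refines.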
Recommendations
- Physics-informed variational inference for uncertainty quantification of stochastic differential equations
- Bi-fidelity modeling of uncertain and partially unknown systems using DeepONets
- A generalized probabilistic learning approach for multi-fidelity uncertainty quantification in complex physical simulations
- Efficient Bayesian physics informed neural networks for inverse problems via ensemble Kalman inversion
- Adversarial uncertainty quantification in physics-informed neural networks
Cites work
- Scientific article, zbMATH DE number 7646020 (no title available)
- A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder
- A generalized approximate control variate framework for multifidelity uncertainty quantification
- A generalized probabilistic learning approach for multi-fidelity uncertainty quantification in complex physical simulations
- A kernel two-sample test
- A low-rank control variate for multilevel Monte Carlo simulation of high-dimensional uncertain systems
- A weighted \(\ell_1\)-minimization approach for sparse polynomial chaos expansions
- Accurate uncertainty quantification using inaccurate computational models
- Adaptive multi-fidelity polynomial chaos approach to Bayesian inference in inverse problems
- Basis adaptive sample efficient polynomial chaos (BASE-PC)
- Bayesian calibration of computer models (with discussion)
- Bayesian deep convolutional encoder-decoder networks for surrogate modeling and uncertainty quantification
- Certified reduced basis methods for parametrized partial differential equations
- Coherence motivated sampling and convergence analysis of least squares polynomial chaos regression
- Compressive sampling of polynomial chaos expansions: convergence analysis and sampling strategies
- Deep UQ: learning deep neural network surrogate models for high dimensional uncertainty quantification
- Gaussian processes for machine learning
- Kernel optimization for low-rank multifidelity algorithms
- Kolmogorov widths and low-rank approximations of parametric elliptic PDEs
- Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders
- Modeling the dynamics of PDE systems with physics-constrained deep auto-regressive networks
- Multi-fidelity non-intrusive polynomial chaos based on regression
- Multi-output local Gaussian process regression: applications to uncertainty quantification
- Neural network training using \(\ell_1\)-regularization and bi-fidelity data
- On transfer learning of neural networks using bi-fidelity data for uncertainty propagation
- Physics-constrained deep learning for high-dimensional surrogate modeling and uncertainty quantification without labeled data
- Practical error bounds for a non-intrusive bi-fidelity approach to parametric/stochastic model reduction
- Predicting the output from a complex computer code when fast approximations are available
- Principal Manifolds and Nonlinear Dimensionality Reduction via Tangent Space Alignment
- Solving inverse problems using conditional invertible neural networks