Bi-fidelity variational auto-encoder for uncertainty quantification
MaRDI QID: Q6202982
DOI: 10.1016/J.CMA.2024.116793
arXiv: 2305.16530
OpenAlex: W4391797872
Authors: Nuojin Cheng, Osman Asif Malik, Subhayan De, S. Becker, Alireza Doostan
Publication date: 26 March 2024
Published in: Computer Methods in Applied Mechanics and Engineering
Abstract: Quantifying the uncertainty of quantities of interest (QoIs) from physical systems is a primary objective in model validation. However, achieving this goal entails balancing the need for computational efficiency with the requirement for numerical accuracy. To address this trade-off, we propose a novel bi-fidelity formulation of variational auto-encoders (BF-VAE) designed to estimate the uncertainty associated with a QoI from low-fidelity (LF) and high-fidelity (HF) samples of the QoI. This model allows for the approximation of the statistics of the HF QoI by leveraging information derived from its LF counterpart. Specifically, we design a bi-fidelity auto-regressive model in the latent space that is integrated within the VAE's probabilistic encoder-decoder structure. An effective algorithm is proposed to maximize the variational lower bound of the HF log-likelihood in the presence of limited HF data, resulting in the synthesis of HF realizations with a reduced computational cost. Additionally, we introduce the concept of the bi-fidelity information bottleneck (BF-IB) to provide an information-theoretic interpretation of the proposed BF-VAE model. Our numerical results demonstrate that BF-VAE leads to considerably improved accuracy, as compared to a VAE trained using only HF data, when limited HF data are available.
Full work available at URL: https://arxiv.org/abs/2305.16530
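To illustrate the idea described in the abstract, the following is a minimal sketch, assuming a PyTorch implementation: an LF encoder/decoder pair, a simple affine auto-regressive map in the latent space standing in for the bi-fidelity latent model, and a variational lower bound on the HF log-likelihood optimized from a few paired LF/HF samples. All class names, network sizes, and the affine form of the latent map are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only; architectural choices and names are assumptions,
# not the BF-VAE code of the cited paper.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, x_dim, z_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, hidden), nn.Tanh())
        self.mu = nn.Linear(hidden, z_dim)
        self.log_var = nn.Linear(hidden, z_dim)

    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.log_var(h)

class Decoder(nn.Module):
    def __init__(self, z_dim, x_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, x_dim))

    def forward(self, z):
        return self.net(z)  # mean of a Gaussian likelihood

class BiFidelityVAE(nn.Module):
    """LF VAE plus an affine auto-regressive latent map z_HF = A z_LF + b (assumed form)."""
    def __init__(self, x_dim, z_dim):
        super().__init__()
        self.enc_lf = Encoder(x_dim, z_dim)
        self.dec_lf = Decoder(z_dim, x_dim)
        self.dec_hf = Decoder(z_dim, x_dim)
        self.latent_ar = nn.Linear(z_dim, z_dim)  # latent-space auto-regression

    def elbo(self, x_lf, x_hf):
        # Encode the LF sample and draw a latent via the reparameterization trick.
        mu, log_var = self.enc_lf(x_lf)
        z_lf = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
        # Map the LF latent to an HF latent and decode an HF realization.
        z_hf = self.latent_ar(z_lf)
        recon = -((x_hf - self.dec_hf(z_hf)) ** 2).sum(dim=1)  # Gaussian log-lik. (up to const.)
        kl = 0.5 * (torch.exp(log_var) + mu ** 2 - 1.0 - log_var).sum(dim=1)
        return (recon - kl).mean()  # variational lower bound on the HF log-likelihood

# Usage sketch: pretrain enc_lf/dec_lf on plentiful LF samples, then fit
# latent_ar and dec_hf by maximizing the ELBO on the few paired (LF, HF) samples.
model = BiFidelityVAE(x_dim=32, z_dim=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_lf, x_hf = torch.randn(16, 32), torch.randn(16, 32)  # placeholder paired data
loss = -model.elbo(x_lf, x_hf)
loss.backward()
opt.step()
```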
Keywords: uncertainty quantification, transfer learning, multi-fidelity, generative modeling, variational auto-encoder
Cites Work
- Bayesian calibration of computer models. (With discussion)
- Gaussian processes for machine learning.
- A kernel two-sample test
- Principal Manifolds and Nonlinear Dimensionality Reduction via Tangent Space Alignment
- Title not available
- Predicting the output from a complex computer code when fast approximations are available
- Certified reduced basis methods for parametrized partial differential equations
- A weighted \(\ell_1\)-minimization approach for sparse polynomial chaos expansions
- Multi-output local Gaussian process regression: applications to uncertainty quantification
- Adaptive multi-fidelity polynomial chaos approach to Bayesian inference in inverse problems
- Compressive sampling of polynomial chaos expansions: convergence analysis and sampling strategies
- Accurate uncertainty quantification using inaccurate computational models
- Multi-fidelity non-intrusive polynomial chaos based on regression
- Kolmogorov widths and low-rank approximations of parametric elliptic PDEs
- Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders
- A low-rank control variate for multilevel Monte Carlo simulation of high-dimensional uncertain systems
- Coherence motivated sampling and convergence analysis of least squares polynomial chaos regression
- Basis adaptive sample efficient polynomial chaos (BASE-PC)
- Practical error bounds for a non-intrusive bi-fidelity approach to parametric/stochastic model reduction
- Bayesian deep convolutional encoder-decoder networks for surrogate modeling and uncertainty quantification
- Physics-constrained deep learning for high-dimensional surrogate modeling and uncertainty quantification without labeled data
- Deep UQ: learning deep neural network surrogate models for high dimensional uncertainty quantification
- A generalized probabilistic learning approach for multi-fidelity uncertainty quantification in complex physical simulations
- Modeling the dynamics of PDE systems with physics-constrained deep auto-regressive networks
- Solving inverse problems using conditional invertible neural networks
- A generalized approximate control variate framework for multifidelity uncertainty quantification
- A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder
- On transfer learning of neural networks using bi-fidelity data for uncertainty propagation
- Kernel optimization for low-rank multifidelity algorithms
- Neural network training using \(\ell_1\)-regularization and bi-fidelity data