Bayesian learning of orthogonal embeddings for multi-fidelity Gaussian processes
DOI: 10.1016/j.cma.2021.114147
OpenAlex: W3047495759
MaRDI QID: Q2246340
Valeria Andreoli, Panagiotis Tsilifis, Sayan Ghosh, Piyush Pandita, Thomas Vandeputte, Li-Ping Wang
Publication date: 16 November 2021
Published in: Computer Methods in Applied Mechanics and Engineering
Full work available at URL: https://arxiv.org/abs/2008.02386
Keywords: dimension reduction; Bayesian inference; Gaussian process regression; uncertainty propagation; geodesic Monte Carlo; multi-fidelity simulations
MSC classifications: Nonparametric regression and quantile regression (62G08); Gaussian processes (60G15); Bayesian inference (62F15); Monte Carlo methods (65C05)
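As orientation for this record, below is a minimal, self-contained sketch (not the authors' code) of the idea named in the title and keywords: Gaussian process regression on a low-dimensional orthogonal embedding W^T x of the inputs, fused across two fidelities in the recursive co-kriging style of the cited works by Kennedy and O'Hagan ("Predicting the output from a complex computer code when fast approximations are available") and Le Gratiet ("Recursive co-kriging model for design of computer experiments with multiple levels of fidelity"). For illustration the embedding W is fixed at its true value; in the paper W is an unknown with orthonormal columns, inferred by sampling on the Stiefel manifold with geodesic Monte Carlo. All function names, kernels, and toy data here are assumptions of this sketch.

```python
# Minimal sketch, assuming a known 2-D orthogonal embedding of 10-D inputs
# and a two-stage (recursive co-kriging style) multi-fidelity GP. Illustrative
# only; the paper infers the embedding with geodesic Monte Carlo.
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel between row-stacked inputs A (n,d) and B (m,d).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior_mean(X, y, Xs, noise=1e-3):
    # Exact GP regression: posterior mean at test inputs Xs.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return rbf_kernel(Xs, X) @ alpha

# Toy setting: 10-D inputs whose effect is confined to a 2-D active subspace.
D, d = 10, 2
W, _ = np.linalg.qr(rng.standard_normal((D, d)))  # orthonormal columns: W.T @ W = I_d

def f_lo(X):  # cheap, biased low-fidelity model
    Z = X @ W
    return np.sin(Z[:, 0]) + 0.4 * Z[:, 1]

def f_hi(X):  # expensive high-fidelity model
    return 1.2 * f_lo(X) + 0.3 * np.cos(2.0 * (X @ W)[:, 0])

X_lo = rng.uniform(-2, 2, (80, D))    # many cheap runs
X_hi = rng.uniform(-2, 2, (15, D))    # few expensive runs
X_test = rng.uniform(-2, 2, (200, D))

# Stage 1: GP on the embedded (W.T x) low-fidelity data.
mu_lo_at_hi = gp_posterior_mean(X_lo @ W, f_lo(X_lo), X_hi @ W)
mu_lo_at_test = gp_posterior_mean(X_lo @ W, f_lo(X_lo), X_test @ W)

# Stage 2: least-squares scale factor rho, plus a GP on the discrepancy.
y_hi = f_hi(X_hi)
rho = (mu_lo_at_hi @ y_hi) / (mu_lo_at_hi @ mu_lo_at_hi)
mu_delta = gp_posterior_mean(X_hi @ W, y_hi - rho * mu_lo_at_hi, X_test @ W)

prediction = rho * mu_lo_at_test + mu_delta
print("test RMSE:", np.sqrt(np.mean((prediction - f_hi(X_test)) ** 2)))
```

The embedding is what makes the few high-fidelity runs usable: both GPs operate in 2 dimensions rather than 10, so far less data is needed to resolve the response surface.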
Cites Work
- Multi-output separable Gaussian process: towards an efficient, fully Bayesian paradigm for uncertainty quantification
- Multi-output local Gaussian process regression: applications to uncertainty quantification
- Kernel principal component analysis for stochastic input model generation
- Gaussian processes with built-in dimensionality reduction: applications to high-dimensional uncertainty propagation
- Uncertainty propagation using infinite mixture of Gaussian processes and variational Bayesian inference
- Sparse polynomial chaos expansions using variational relevance vector machines
- Bayes-Hermite quadrature
- Reduced Wiener chaos representation of random fields via basis adaptation and projection
- Machine learning of linear differential equations using Gaussian processes
- Discovering variable fractional orders of advection-dispersion equations from field data using multi-fidelity Bayesian optimization
- Support-vector networks
- Quasi-Monte Carlo integration
- Deep UQ: learning deep neural network surrogate models for high dimensional uncertainty quantification
- Compressive sensing adaptation for polynomial chaos expansions
- Structured Bayesian Gaussian process latent variable model: applications to data-driven dimensionality reduction and high-dimensional inversion
- Physics-constrained deep learning for high-dimensional surrogate modeling and uncertainty quantification without labeled data
- Surrogate-based sequential Bayesian experimental design using non-stationary Gaussian processes
- Bayesian Calibration of Computer Models
- Geodesic Monte Carlo on Embedded Manifolds
- Active Subspaces
- Numerical Gaussian Processes for Time-Dependent and Nonlinear Partial Differential Equations
- Multi-fidelity optimization via surrogate modelling
- Spectral Methods for Uncertainty Quantification
- Markov Chain Monte Carlo Inference of Parametric Dictionaries for Sparse Bayesian Approximations
- Bayesian adaptation of chaos representations using variational inference and sampling on geodesics
- Riemann Manifold Langevin and Hamiltonian Monte Carlo Methods
- Nonlinear information fusion algorithms for data-efficient multi-fidelity modelling
- Inverse Problem Theory and Methods for Model Parameter Estimation
- The Wiener–Askey Polynomial Chaos for Stochastic Differential Equations
- A Limited Memory Algorithm for Bound Constrained Optimization
- Multidimensional Adaptive Relevance Vector Machines for Uncertainty Quantification
- Predicting the output from a complex computer code when fast approximations are available
- Recursive co-kriging model for design of computer experiments with multiple levels of fidelity
- Multifidelity Dimension Reduction via Active Subspaces
- Statistics for Spatial Data
- Bayesian Analysis of Hierarchical Multifidelity Codes
- Uncertainty propagation in CFD using polynomial chaos decomposition
- Geometric Numerical Integration
- Multifidelity Information Fusion Algorithms for High-Dimensional Systems and Massive Data Sets