Neural network training using \(\ell_1\)-regularization and bi-fidelity data
From MaRDI portal
Publication:2138992
Recommendations
- On robust training of regression neural networks
- Combining neural networks for function approximation under conditions of sparse data: the biased regression approach
- Improving neural network training solutions using regularisation
- Regularisation of neural networks by enforcing Lipschitz continuity
- Training neural networks with noisy data as an ill-posed problem
- Regularized greedy algorithms for network training with data noise
Cites work
- Scientific article; zbMATH DE number 6378127 (no title available)
- Scientific article; zbMATH DE number 7626756 (no title available)
- A multi-fidelity neural network surrogate sampling method for uncertainty quantification
- A Sparse Grid Stochastic Collocation Method for Partial Differential Equations with Random Input Data
- A Stochastic Collocation Method for Elliptic Partial Differential Equations with Random Input Data
- A composite neural network that learns from multi-fidelity data: application to function approximation and inverse PDE problems
- A non-adapted sparse approximation of PDEs with stochastic inputs
- A stochastic projection method for fluid flow. II: Random process
- A survey of projection-based model reduction methods for parametric dynamical systems
- A weighted \(\ell_1\)-minimization approach for sparse polynomial chaos expansions
- Accurate solutions to the square thermally driven cavity at high Rayleigh number
- Advanced Lectures on Machine Learning
- Automated solution of differential equations by the finite element method. The FEniCS book
- Bi-fidelity stochastic gradient descent for structural optimization under uncertainty
- Compressed sensing
- Data-driven prediction of unsteady flow over a circular cylinder using deep learning
- Deep learning
- Enhancing sparsity by reweighted \(\ell _{1}\) minimization
- Error bounds for approximations with deep ReLU networks
- Gaussian processes for machine learning
- Hidden physics models: machine learning of nonlinear partial differential equations
- Multi-fidelity optimization via surrogate modelling
- On transfer learning of neural networks using bi-fidelity data for uncertainty propagation
- Oracle inequalities in empirical risk minimization and sparse recovery problems. École d'Été de Probabilités de Saint-Flour XXXVIII-2008.
- Practical error bounds for a non-intrusive bi-fidelity approach to parametric/stochastic model reduction
- Prediction of aerodynamic flow fields using convolutional neural networks
- Reconciling modern machine-learning practice and the classical bias-variance trade-off
- Reducing the Dimensionality of Data with Neural Networks
- The Adaptive Lasso and Its Oracle Properties
- The Wiener--Askey Polynomial Chaos for Stochastic Differential Equations
- The gap between theory and practice in function approximation with deep neural networks
- Uncertainty analysis for the steady-state flows in a dual throat nozzle
- Understanding machine learning. From theory to algorithms
Cited in (7)
- On sparse regression, \(L_p\)-regularization, and automated model discovery
- Multifidelity deep operator networks for data-driven and physics-informed problems
- A Neighborhood-Based Enhancement of the Gauss-Newton Bayesian Regularization Training Method
- Bi-fidelity modeling of uncertain and partially unknown systems using DeepONets
- Bi-fidelity variational auto-encoder for uncertainty quantification
- Combining neural networks for function approximation under conditions of sparse data: the biased regression approach
- A multifidelity machine learning based semi-Lagrangian finite volume scheme for linear transport equations and the nonlinear Vlasov-Poisson system