The Random Feature Model for Input-Output Maps between Banach Spaces
DOI: 10.1137/20M133957X · MaRDI QID: Q3382802
Authors: Nicholas H. Nelsen, A. M. Stuart
Publication date: 22 September 2021
Published in: SIAM Journal on Scientific Computing
Full work available at URL: https://arxiv.org/abs/2005.10224
Recommendations
- Regularized learning schemes in feature Banach spaces
- Learning Theory
- Reproducing kernel Banach spaces for machine learning
- Banach space representer theorems for neural networks and ridge splines
- Learning with reproducing kernel Banach spaces
- Regularized learning in Banach spaces as an optimization problem: representer theorems
- Quasi-Banach Spaces of Random Variables and Modeling of Stochastic Processes
- Random Banach spaces: The limitations of the method
Keywords: emulator; supervised learning; model reduction; surrogate model; solution map; high-dimensional approximation; data-driven computing; parametric PDE; random feature
MSC Classification
- Neural nets and related approaches to inference from stochastic processes (62M45)
- PDEs with randomness, stochastic partial differential equations (35R60)
- Algorithms for approximation of functions (65D15)
- Numerical approximation of high-dimensional functions; sparse grids (65D40)
Cites Work
- DGM: a deep learning algorithm for solving partial differential equations
- Gaussian processes for machine learning
- Vector valued reproducing kernel Hilbert spaces of integrable functions and Mercer theorem
- Universal approximation bounds for superpositions of a sigmoidal function
- On Learning Vector-Valued Functions
- Theory of Reproducing Kernels
- Spatial variation, 2nd ed.
- On the mathematical foundations of learning
- On the Equivalence between Kernel Quadrature Rules and Random Feature Expansions
- Algorithms for Numerical Analysis in High Dimensions
- Elliptic partial differential equations of second order
- An "empirical interpolation" method: Application to efficient reduced-basis discretization of partial differential equations
- Optimal rates for the regularized least-squares algorithm
- Optimization with PDE Constraints
- Fourth-Order Time-Stepping for Stiff PDEs
- Blow up and regularity for fractal Burgers equation
- A least-squares approximation of partial differential equations with high-dimensional random inputs
- MCMC methods for functions: modifying old algorithms to make them faster
- Adaptive finite element methods for elliptic equations with non-smooth coefficients
- Bayesian learning for neural networks
- Sparse adaptive Taylor approximation algorithms for parametric and stochastic elliptic PDEs
- Approximation of high-dimensional parametric PDEs
- Numerical solution of the parametric diffusion equation by deep neural networks
- Model reduction and neural networks for parametric PDEs
- Scattered Data Approximation
- Operator-valued kernels for learning from functional response data
- Functional multi-layer perceptron: A nonlinear tool for functional data analysis
- Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders
- A Data-Driven Stochastic Method for Elliptic PDEs with Random Coefficients
- Non-intrusive reduced order modeling of nonlinear problems using neural networks
- Bayesian deep convolutional encoder-decoder networks for surrogate modeling and uncertainty quantification
- The Deep Ritz Method: a deep learning-based numerical algorithm for solving variational problems
- Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations
- A mean-field optimal control formulation of deep learning
- Survey of Multifidelity Methods in Uncertainty Propagation, Inference, and Optimization
- Variational training of neural network approximations of solution maps for physical models
- Deep UQ: learning deep neural network surrogate models for high dimensional uncertainty quantification
- Deep learning in high dimension: Neural network expression rates for generalized polynomial chaos expansions in UQ
- Optimal weighted least-squares methods
- Data driven approximation of parametrized PDEs by reduced basis and neural networks
- Deep neural networks motivated by partial differential equations
- Stable architectures for deep neural networks
- A proposal on machine learning via dynamical systems
- Model Reduction and Approximation
- Reconciling modern machine-learning practice and the classical bias–variance trade-off
- Machine learning from a continuous viewpoint. I
- Solving electrical impedance tomography with deep learning
- Reproducing Kernel Hilbert Spaces for Parametric Partial Differential Equations
- ConvPDE-UQ: convolutional neural networks with quantified uncertainty for heterogeneous elliptic partial differential equations on varied domains
- Hierarchical Bayesian level set inversion
- A physics-informed operator regression framework for extracting data-driven continuum models
- Data-driven deep learning of partial differential equations in modal space
- Learning data-driven discretizations for partial differential equations
- Kernel-based reconstructions for parametric PDEs
- Data-driven forward discretizations for Bayesian inversion
- Meta-learning pseudo-differential operators with deep neural networks
Cited In (28)
- Reduced Operator Inference for Nonlinear Partial Differential Equations
- Two-Layer Neural Networks with Values in a Banach Space
- Do ideas have shape? Idea registration as the continuous limit of artificial neural networks
- Koopman neural operator as a mesh-free solver of non-linear partial differential equations
- Learning about structural errors in models of complex dynamical systems
- Local approximation of operators
- Learning homogenization for elliptic operators
- Operator learning using random features: a tool for scientific computing
- Sparse Recovery of Elliptic Solvers from Matrix-Vector Products
- SPADE4: sparsity and delay embedding based forecasting of epidemics
- A framework for machine learning of model error in dynamical systems
- Iterated Kalman methodology for inverse problems
- Energy-dissipative evolutionary deep operator neural networks
- Fast macroscopic forcing method
- Optimal Dirichlet boundary control by Fourier neural operators applied to nonlinear optics
- Learning phase field mean curvature flows with neural networks
- Derivative-informed neural operator: an efficient framework for high-dimensional parametric derivative learning
- The Random Feature Model for Input-Output Maps between Banach Spaces
- RandONets: shallow networks with random projections for learning linear and nonlinear operators
- MIONet: Learning Multiple-Input Operators via Tensor Product
- Learning high-dimensional parametric maps via reduced basis adaptive residual networks
- Variational regularization in inverse problems and machine learning
- Data-driven forward and inverse problems for chaotic and hyperchaotic dynamic systems based on two machine learning architectures
- An enhanced V-cycle MgNet model for operator learning in numerical partial differential equations
- Convergence Rates for Learning Linear Operators from Noisy Data
- Large-scale Bayesian optimal experimental design with derivative-informed projected neural network
- Transferable neural networks for partial differential equations
- Multi-scale time-stepping of partial differential equations with transformers