Two-Layer Neural Networks with Values in a Banach Space
Publication:5055293
Keywords: curse of dimensionality; Bregman distance; ReLU; Barron space; variation norm space; vector-valued neural networks
Mathematics Subject Classification:
- Artificial neural networks and deep learning (68T07)
- Computational learning theory (68Q32)
- Abstract approximation theory (approximation in normed linear spaces and other abstract spaces) (41A65)
- Spaces of vector- and operator-valued functions (46E40)
- Numerical solution to inverse problems in abstract spaces (65J22)
Recommendations
- Two-layer networks with the \(\text{ReLU}^k\) activation function: Barron spaces and derivative approximation
- Approximation spaces of deep neural networks
- Understanding neural networks with reproducing kernel Banach spaces
- Representation formulas and pointwise properties for Barron functions
- Multiple general sigmoids based Banach space valued neural network multivariate approximation
Cites work
- scientific article; zbMATH DE number 1716480
- scientific article; zbMATH DE number 3137662
- scientific article; zbMATH DE number 3576139
- scientific article; zbMATH DE number 1215245
- A counterexample to the approximation problem in Banach spaces
- A distribution-free theory of nonparametric regression
- A mean field view of the landscape of two-layer neural networks
- A theoretical analysis of deep neural networks and parametric PDEs
- A unifying representer theorem for inverse problems and machine learning
- Approximation and learning by greedy algorithms
- Approximation by superpositions of a sigmoidal function
- Banach lattices
- Banach space representer theorems for neural networks and ridge splines
- Bias reduction in variational regularization
- Bounds on rates of variable-basis and neural-network approximation
- Breaking the curse of dimensionality with convex neural networks
- Comparison of worst case errors in linear and neural network approximation
- Convergence rates of convex variational regularization
- Convex regularization in statistical inverse learning problems
- Error estimates for DeepONets: a deep learning framework in infinite dimensions
- Exact support recovery for sparse spikes deconvolution
- Inverse problems in spaces of measures
- Kernels for vector-valued functions: a review
- Learning from examples as an inverse problem
- Lipschitz algebras
- Mean field analysis of neural networks: a law of large numbers
- Model reduction and neural networks for parametric PDEs
- Modern regularization methods for inverse problems
- Multilayer feedforward networks are universal approximators
- Neural network approximation
- On Learning Vector-Valued Functions
- On debiasing restoration algorithms: applications to total-variation and nonlocal-means
- On representer theorems and convex regularization
- On the regularizing property of stochastic gradient descent
- Optimal rates for regularization of statistical inverse learning problems
- Relative weak compactness of solid hulls in Banach lattices
- Representation formulas and pointwise properties for Barron functions
- Solving inverse problems using data-driven models
- Some applications of Rademacher sequences in Banach lattices
- Sparsity of solutions for variational inverse problems with finite-dimensional data
- The Barron space and the flow-induced function spaces for neural network models
- The Random Feature Model for Input-Output Maps between Banach Spaces
- The implicit bias of gradient descent on separable data
- Trainability and Accuracy of Artificial Neural Networks: An Interacting Particle System Approach
- Training neural networks with noisy data as an ill-posed problem
- Universal approximation bounds for superpositions of a sigmoidal function
- Variational methods in imaging
- Variational regularisation for inverse problems with imperfect forward operators and general noise models
- Vector-valued reproducing kernel Banach spaces with applications to multi-task learning
Cited in (10)
- Weighted variation spaces and approximation by shallow ReLU networks
- Linearized two-layers neural networks in high dimension
- Two-layer neural networks with values in a Banach space
- Operator learning using random features: a tool for scientific computing
- A Riemannian mean field formulation for two-layer neural networks with batch normalization
- Convergence Rates for Learning Linear Operators from Noisy Data
- Neural-network-based regularization methods for inverse problems in imaging
- Richards's curve induced Banach space valued multivariate neural network approximation
- From kernel methods to neural networks: a unifying variational formulation
- Two-layer networks with the \(\text{ReLU}^k\) activation function: Barron spaces and derivative approximation