Two-Layer Neural Networks with Values in a Banach Space
DOI: 10.1137/21M1458144
MaRDI QID: Q5055293
FDO: Q5055293
Author: Yury Korolev
Publication date: 13 December 2022
Published in: SIAM Journal on Mathematical Analysis
Full work available at URL: https://arxiv.org/abs/2105.02095
Recommendations
- Two-layer networks with the \(\text{ReLU}^k\) activation function: Barron spaces and derivative approximation
- Approximation spaces of deep neural networks
- Understanding neural networks with reproducing kernel Banach spaces
- Representation formulas and pointwise properties for Barron functions
- Multiple general sigmoids based Banach space valued neural network multivariate approximation
Keywords: curse of dimensionality; Bregman distance; ReLU; Barron space; variation norm space; vector-valued neural networks
Mathematics Subject Classification:
- Artificial neural networks and deep learning (68T07)
- Computational learning theory (68Q32)
- Abstract approximation theory (approximation in normed linear spaces and other abstract spaces) (41A65)
- Spaces of vector- and operator-valued functions (46E40)
- Numerical solution to inverse problems in abstract spaces (65J22)
Cites Work
- Universal approximation bounds for superpositions of a sigmoidal function
- Comparison of worst case errors in linear and neural network approximation
- On Learning Vector-Valued Functions
- Kernels for vector-valued functions: a review
- Title not available
- A distribution-free theory of nonparametric regression
- Approximation and learning by greedy algorithms
- Title not available
- Convergence rates of convex variational regularization
- Variational methods in imaging
- Banach lattices
- Title not available
- Title not available
- Multilayer feedforward networks are universal approximators
- Approximation by superpositions of a sigmoidal function
- Exact support recovery for sparse spikes deconvolution
- A counterexample to the approximation problem in Banach spaces
- Learning from examples as an inverse problem
- Convex regularization in statistical inverse learning problems
- Some applications of Rademacher sequences in Banach lattices
- Relative weak compactness of solid hulls in Banach lattices
- Model reduction and neural networks for parametric PDEs
- Bounds on rates of variable-basis and neural-network approximation
- Training neural networks with noisy data as an ill-posed problem
- Optimal rates for regularization of statistical inverse learning problems
- Inverse problems in spaces of measures
- On representer theorems and convex regularization
- Sparsity of solutions for variational inverse problems with finite-dimensional data
- Lipschitz algebras
- Variational regularisation for inverse problems with imperfect forward operators and general noise models
- Solving inverse problems using data-driven models
- A mean field view of the landscape of two-layer neural networks
- Breaking the curse of dimensionality with convex neural networks
- Neural network approximation
- Vector-valued reproducing kernel Banach spaces with applications to multi-task learning
- Bias reduction in variational regularization
- On debiasing restoration algorithms: applications to total-variation and nonlocal-means
- Modern regularization methods for inverse problems
- The implicit bias of gradient descent on separable data
- Mean field analysis of neural networks: a law of large numbers
- On the regularizing property of stochastic gradient descent
- Error estimates for DeepONets: a deep learning framework in infinite dimensions
- A theoretical analysis of deep neural networks and parametric PDEs
- Banach space representer theorems for neural networks and ridge splines
- A unifying representer theorem for inverse problems and machine learning
- The Random Feature Model for Input-Output Maps between Banach Spaces
- Trainability and Accuracy of Artificial Neural Networks: An Interacting Particle System Approach
- Representation formulas and pointwise properties for Barron functions
- The Barron space and the flow-induced function spaces for neural network models
Cited In (9)
- Operator learning using random features: a tool for scientific computing
- A Riemannian mean field formulation for two-layer neural networks with batch normalization
- Two-layer networks with the \(\text{ReLU}^k\) activation function: Barron spaces and derivative approximation
- Weighted variation spaces and approximation by shallow ReLU networks
- Neural-network-based regularization methods for inverse problems in imaging
- From kernel methods to neural networks: a unifying variational formulation
- Convergence Rates for Learning Linear Operators from Noisy Data
- Richards's curve induced Banach space valued multivariate neural network approximation
- Linearized two-layers neural networks in high dimension