Two-Layer Neural Networks with Values in a Banach Space
From MaRDI portal
Publication: 5055293
DOI: 10.1137/21M1458144
MaRDI QID: Q5055293
Publication date: 13 December 2022
Published in: SIAM Journal on Mathematical Analysis
Full work available at URL: https://arxiv.org/abs/2105.02095
Keywords: curse of dimensionality; Bregman distance; ReLU; Barron space; variation norm space; vector-valued neural networks
MSC classification: Computational learning theory (68Q32); Artificial neural networks and deep learning (68T07); Spaces of vector- and operator-valued functions (46E40); Abstract approximation theory (approximation in normed linear spaces and other abstract spaces) (41A65); Numerical solution to inverse problems in abstract spaces (65J22)
Cites Work
- Optimal rates for regularization of statistical inverse learning problems
- Exact support recovery for sparse spikes deconvolution
- Variational methods in imaging
- Banach lattices
- Some applications of Rademacher sequences in Banach lattices
- Relative weak compactness of solid hulls in Banach lattices
- Bias reduction in variational regularization
- Multilayer feedforward networks are universal approximators
- A distribution-free theory of nonparametric regression
- Vector-valued reproducing kernel Banach spaces with applications to multi-task learning
- Model reduction and neural networks for parametric PDEs
- Representation formulas and pointwise properties for Barron functions
- A theoretical analysis of deep neural networks and parametric PDEs
- The Barron space and the flow-induced function spaces for neural network models
- A unifying representer theorem for inverse problems and machine learning
- Sparsity of solutions for variational inverse problems with finite-dimensional data
- Approximation and learning by greedy algorithms
- A counterexample to the approximation problem in Banach spaces
- Kernels for Vector-Valued Functions: A Review
- On Debiasing Restoration Algorithms: Applications to Total-Variation and Nonlocal-Means
- The Random Feature Model for Input-Output Maps between Banach Spaces
- Universal approximation bounds for superpositions of a sigmoidal function
- Bounds on rates of variable-basis and neural-network approximation
- Comparison of worst case errors in linear and neural network approximation
- Lipschitz Algebras
- On the regularizing property of stochastic gradient descent
- Convergence rates of convex variational regularization
- Inverse problems in spaces of measures
- A mean field view of the landscape of two-layer neural networks
- Trainability and Accuracy of Artificial Neural Networks: An Interacting Particle System Approach
- Error estimates for DeepONets: a deep learning framework in infinite dimensions
- Variational regularisation for inverse problems with imperfect forward operators and general noise models
- Mean Field Analysis of Neural Networks: A Law of Large Numbers
- Modern regularization methods for inverse problems
- Solving inverse problems using data-driven models
- On Representer Theorems and Convex Regularization
- Breaking the Curse of Dimensionality with Convex Neural Networks
- On Learning Vector-Valued Functions
- Neural network approximation
- Approximation by superpositions of a sigmoidal function
- Training neural networks with noisy data as an ill-posed problem
- Convex regularization in statistical inverse learning problems