Learning the mapping \(\mathbf{x}\mapsto \sum_{i=1}^d x_i^2\): the cost of finding the needle in a haystack
DOI: 10.1007/s42967-020-00078-2 · zbMATH Open: 1476.68242 · arXiv: 2002.10561 · OpenAlex: W3047755916 · MaRDI QID: Q2667355
Authors: Yanyan Li
Publication date: 24 November 2021
Published in: Communications on Applied Mathematics and Computation
Full work available at URL: https://arxiv.org/abs/2002.10561
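As a concrete illustration of the learning problem in the title, here is a minimal sketch (assumptions: not the paper's actual experiment or architecture; dimensions, widths, and learning rate are arbitrary choices for illustration) that fits a one-hidden-layer ReLU network to the target map \(f(\mathbf{x}) = \sum_{i=1}^d x_i^2\) by plain gradient descent in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, m = 4, 2000, 64          # input dim, sample count, hidden width (illustrative)

# Training data for the target map f(x) = sum_i x_i^2
X = rng.uniform(-1.0, 1.0, size=(n, d))
y = (X ** 2).sum(axis=1)

# One-hidden-layer ReLU network: f_hat(x) = a . relu(W x + b)
W = rng.normal(0.0, 1.0, size=(m, d))
b = rng.normal(0.0, 1.0, size=m)
a = rng.normal(0.0, 0.1, size=m)

lr = 0.05
for step in range(2000):
    Z = X @ W.T + b            # (n, m) pre-activations
    H = np.maximum(Z, 0.0)     # ReLU features
    err = H @ a - y            # residuals, shape (n,)
    # Gradients of L = (1/2n) * sum(err^2) w.r.t. a, W, b
    ga = H.T @ err / n
    gH = np.outer(err, a) * (Z > 0)
    gW = gH.T @ X / n
    gb = gH.sum(axis=0) / n
    a -= lr * ga
    W -= lr * gW
    b -= lr * gb

mse = float(np.mean((np.maximum(X @ W.T + b, 0.0) @ a - y) ** 2))
print(mse)
```

After training, the mean squared error falls well below the variance of the targets (roughly 0.36 for this uniform input distribution), showing the quadratic map is learnable by a shallow network at small d; the paper's point concerns how the cost scales as d grows.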
Recommendations
- The Barron space and the flow-induced function spaces for neural network models
- Just interpolate: kernel ``ridgeless'' regression can generalize
- Sparse deep neural networks using \(L_{1,\infty}\)-weight normalization
- An analysis of training and generalization errors in shallow and deep networks
Classification (MSC):
- 62R07 Statistical aspects of big data and data science
- 68T05 Learning and adaptive systems in artificial intelligence
- 65B10 Numerical summation of series
- 62M45 Neural nets and related approaches to inference from stochastic processes
Cites Work
- Universal approximation bounds for superpositions of a sigmoidal function
- On early stopping in gradient descent learning
- Error bounds for approximations with deep ReLU networks
- Breaking the curse of dimensionality with convex neural networks
- Early Stopping for Kernel Boosting Algorithms: A General Analysis With Localized Complexities
- Spurious valleys in one-hidden-layer neural network optimization landscapes
Cited In (2)