Best \(k\)-layer neural network approximations
DOI: 10.1007/S00365-021-09545-2
zbMATH Open: 1501.41005
arXiv: 1907.01507
OpenAlex: W3115973547
Wikidata: Q114229768 (Scholia: Q114229768)
MaRDI QID: Q2117342
FDO: Q2117342
Yang Qi, Mateusz Michałek, Lek-Heng Lim
Publication date: 21 March 2022
Published in: Constructive Approximation
Full work available at URL: https://arxiv.org/abs/1907.01507
Recommendations
- Critical points for least-squares problems involving certain analytic functions, with applications to sigmoidal nets
- Hardness results for neural network approximation problems
- Over-parametrized deep neural networks minimizing the empirical risk do not generalize well
- Error bounds for approximations with deep ReLU networks
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
MSC classification
- Learning and adaptive systems in artificial intelligence (68T05)
- Neural networks for/in biological studies, artificial life and related topics (92B20)
- Approximation by other special function classes (41A30)
- Best approximation, Chebyshev systems (41A50)
Cites Work
- Tensor Rank and the Ill-Posedness of the Best Low-Rank Approximation Problem
- Condition
- Multilayer feedforward networks are universal approximators
- Approximation by superpositions of a sigmoidal function
- Networks and the best approximation property
- Training neural networks with noisy data as an ill-posed problem
- Machine learning: from theory to applications. Cooperative research at Siemens and MIT
- Complex best \(r\)-term approximations almost always exist in finite dimensions
- Topological properties of the set of functions generated by neural networks of fixed size
Cited In (1)
Uses Software
This page was built for publication: Best \(k\)-layer neural network approximations