Kolmogorov width decay and poor approximators in machine learning: shallow neural networks, random feature models and neural tangent kernels
DOI: 10.1007/s40687-020-00233-4
OpenAlex: W3120785056
MaRDI QID: Q2226529
Stephan Wojtowytsch, Weinan E
Publication date: 8 February 2021
Published in: Research in the Mathematical Sciences
Full work available at URL: https://arxiv.org/abs/2005.10807
Keywords: approximation theory; curse of dimensionality; reproducing kernel Hilbert space; Kolmogorov width; multi-layer network; two-layer network; neural tangent kernel; Barron space; random feature model; population risk
MSC classification: Artificial neural networks and deep learning (68T07); Abstract approximation theory (approximation in normed linear spaces and other abstract spaces) (41A65); Hilbert spaces with reproducing kernels (= (proper) functional Hilbert spaces, including de Branges-Rovnyak and other structured spaces) (46E22); Banach spaces of continuous, differentiable or analytic functions (46E15); Approximation by other special function classes (41A30)
Cites Work
- On the rate of convergence in Wasserstein distance of the empirical measure
- Applied functional analysis. Functional analysis, Sobolev spaces and elliptic differential equations
- Functional analysis, Sobolev spaces and partial differential equations
- A comparative analysis of optimization and generalization properties of two-layer neural network and random feature models under gradient descent dynamics
- A priori estimates of the population risk for two-layer neural networks
- Universal approximation bounds for superpositions of a sigmoidal function
- Breaking the Curse of Dimensionality with Convex Neural Networks
- Understanding Machine Learning
- Optimal Transport
- Approximation by superpositions of a sigmoidal function