Kolmogorov width decay and poor approximators in machine learning: shallow neural networks, random feature models and neural tangent kernels (Q2226529)
From MaRDI portal
scientific article; zbMATH DE number 7307666
Statements
Kolmogorov width decay and poor approximators in machine learning: shallow neural networks, random feature models and neural tangent kernels (English)
8 February 2021
Keywords: curse of dimensionality; two-layer network; multi-layer network; population risk; Barron space; reproducing kernel Hilbert space; random feature model; neural tangent kernel; Kolmogorov width; approximation theory