Kolmogorov width decay and poor approximators in machine learning: shallow neural networks, random feature models and neural tangent kernels (Q2226529)

From MaRDI portal
 
Cited works:

    Breaking the Curse of Dimensionality with Convex Neural Networks
    Universal approximation bounds for superpositions of a sigmoidal function
    Functional analysis, Sobolev spaces and partial differential equations
    Approximation by superpositions of a sigmoidal function
    Applied functional analysis. Functional analysis, Sobolev spaces and elliptic differential equations
    A priori estimates of the population risk for two-layer neural networks
    A comparative analysis of optimization and generalization properties of two-layer neural network and random feature models under gradient descent dynamics
    On the rate of convergence in Wasserstein distance of the empirical measure
    Q5534403
    Understanding Machine Learning
    Optimal Transport

Latest revision as of 12:38, 24 July 2024

scientific article

Language: English
Label: Kolmogorov width decay and poor approximators in machine learning: shallow neural networks, random feature models and neural tangent kernels
Description: scientific article

Statements

    Title: Kolmogorov width decay and poor approximators in machine learning: shallow neural networks, random feature models and neural tangent kernels (English)
    Publication date: 8 February 2021
    Keywords: curse of dimensionality; two-layer network; multi-layer network; population risk; Barron space; reproducing kernel Hilbert space; random feature model; neural tangent kernel; Kolmogorov width; approximation theory
