A proof of convergence for gradient descent in the training of artificial neural networks for constant target functions (Q2145074)

Property / DOI: 10.1016/j.jco.2022.101646 / rank
 Normal rank
Property / OpenAlex ID: W3132264265 / rank
 Normal rank
Property / Wikidata QID: Q113871711 / rank
 Normal rank
Property / arXiv ID: 2102.09924 / rank
 Normal rank
Property / cites work: Breaking the Curse of Dimensionality with Convex Neural Networks / rank
 Normal rank
Property / cites work: Full error analysis for the training of deep neural networks / rank
 Normal rank
Property / cites work: Non-convergence of stochastic gradient descent in the training of deep neural networks / rank
 Normal rank
Property / cites work: A comparative analysis of optimization and generalization properties of two-layer neural network and random feature models under gradient descent dynamics / rank
 Normal rank
Property / cites work: Q4969246 / rank
 Normal rank
Property / cites work: Strong error analysis for stochastic gradient descent optimization algorithms / rank
 Normal rank
Property / cites work: Lower error bounds for the stochastic gradient descent optimization algorithm: sharp convergence rates for slowly and fast decaying learning rates / rank
 Normal rank
Property / cites work: Dying ReLU and Initialization: Theory and Numerical Examples / rank
 Normal rank
Property / cites work: Q3996430 / rank
 Normal rank
Property / cites work: Gradient descent optimizes over-parameterized deep ReLU networks / rank
 Normal rank

Language: English
Label: A proof of convergence for gradient descent in the training of artificial neural networks for constant target functions
Description: scientific article

    Statements

    A proof of convergence for gradient descent in the training of artificial neural networks for constant target functions (English)
    17 June 2022
    artificial neural networks
    nonconvex optimization
    nonsmooth optimization
    gradient methods
    machine learning

    Identifiers

    10.1016/j.jco.2022.101646 (DOI)
    W3132264265 (OpenAlex ID)
    Q113871711 (Wikidata QID)
    2102.09924 (arXiv ID)