Convergence analysis of gradient descent stochastic algorithms (Q1359455)

From MaRDI portal
Reviewed by: Yousri M. Abd-El-Fattah


Language: English
Label: Convergence analysis of gradient descent stochastic algorithms
Description: scientific article

    Statements

Title: Convergence analysis of gradient descent stochastic algorithms (English)
Publication date: 7 October 1997
The paper provides convergence results for a sample-path-based stochastic gradient-descent algorithm for optimizing expected-value performance measures in discrete event systems. The algorithm requires that the distance between two consecutive iterates converge to zero as the iteration count tends to infinity. Two convergence results are given: one for the case where the expected-value function is continuously differentiable, and the other for the case where it is nondifferentiable but the sample performance functions are convex. The proofs rely on a version of the uniform law of large numbers that can be established for many discrete event systems in which infinitesimal perturbation analysis is known to be strongly consistent.
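    To illustrate the kind of iteration described above, the following is a minimal sketch (not the paper's own construction) of a sample-path stochastic gradient descent with diminishing step sizes a_k = a/k, so that the distance between consecutive iterates tends to zero. The estimator `grad_estimate`, the step-size rule, and the toy objective are illustrative assumptions only; in the setting of the paper the gradient samples would come from an infinitesimal perturbation analysis estimator for a discrete event system.

<pre>
import numpy as np

def sample_path_sgd(grad_estimate, theta0, n_iters=10_000, a=1.0, seed=0):
    """Stochastic gradient descent with diminishing steps a_k = a / k.

    grad_estimate(theta, rng) is assumed to return a sample-path
    (e.g. IPA-type) estimate of the gradient of the expected-value
    performance measure at theta.  With a_k = a / k the steps are
    square-summable but not summable, so the distance between
    consecutive iterates goes to zero, mirroring the condition
    assumed in the paper.
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, n_iters + 1):
        g = grad_estimate(theta, rng)   # noisy sample-path gradient
        step = a / k                    # diminishing step size
        theta = theta - step * g        # gradient-descent update
    return theta

# Toy usage: minimize E[(theta - X)^2] with X ~ N(1, 1); the minimizer is 1.
if __name__ == "__main__":
    def grad_estimate(theta, rng):
        x = rng.normal(loc=1.0, scale=1.0)
        return 2.0 * (theta - x)        # unbiased gradient sample

    print(sample_path_sgd(grad_estimate, theta0=0.0))
</pre>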
    Keywords: gradient descent; uniform law of large numbers; infinitesimal perturbation analysis; discrete event systems

    Identifiers