Convergence and convergence rate of stochastic gradient search in the case of multiple and non-isolated extrema


DOI: 10.1016/J.SPA.2014.11.001
zbMATH Open: 1357.62274
arXiv: 0907.1020
OpenAlex: W2084687242
MaRDI QID: Q2018557


Author: Vladislav B. Tadić


Publication date: 24 March 2015

Published in: Stochastic Processes and their Applications

Abstract: The asymptotic behavior of stochastic gradient algorithms is studied. Relying on results from differential geometry (the Łojasiewicz gradient inequality), the single limit-point convergence of the algorithm iterates is demonstrated and relatively tight bounds on the convergence rate are derived. In sharp contrast to existing asymptotic results, the new results presented here allow the objective function to have multiple and non-isolated minima. They also offer new insights into the asymptotic properties of several classes of recursive algorithms that are routinely used in engineering, statistics, machine learning and operations research.
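For context, the objects named in the abstract take the following standard forms; this is a generic reconstruction rather than the paper's exact statement, with the usual Robbins-Monro step-size conditions assumed:

\[
\theta_{n+1} = \theta_n - \alpha_n \bigl( \nabla f(\theta_n) + \xi_n \bigr),
\qquad
\sum_n \alpha_n = \infty,
\qquad
\sum_n \alpha_n^2 < \infty,
\]

where $\xi_n$ denotes zero-mean gradient noise. The Łojasiewicz gradient inequality, satisfied locally by every real-analytic $f$, states that for each point $\hat\theta$ there exist $C > 0$, $\mu \in [1/2, 1)$ and a neighborhood $U$ of $\hat\theta$ such that

\[
\bigl| f(\theta) - f(\hat\theta) \bigr|^{\mu} \le C \, \bigl\| \nabla f(\theta) \bigr\|
\quad \text{for all } \theta \in U.
\]

This inequality is what prevents the iterates from wandering indefinitely along a connected set of minima, and it is the key ingredient behind single limit-point convergence.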


Full work available at URL: https://arxiv.org/abs/0907.1020
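As a concrete illustration of the setting (a sketch of my own, not code from the paper: the objective, step sizes, and noise model are assumptions chosen for simplicity), consider stochastic gradient descent on f(θ) = (‖θ‖² − 1)², whose minimizers form the entire unit circle, a connected set of non-isolated minima. Since f is real-analytic, it satisfies the Łojasiewicz gradient inequality, and the iterates settle at a single point of the circle:

```python
import numpy as np

# Hypothetical illustration (not from the paper): stochastic gradient search
#   theta_{n+1} = theta_n - alpha_n * (grad f(theta_n) + xi_n)
# on f(theta) = (||theta||^2 - 1)^2, whose minimum set is the whole unit
# circle -- multiple, non-isolated minima.

rng = np.random.default_rng(0)

def grad_f(theta: np.ndarray) -> np.ndarray:
    """Gradient of (||theta||^2 - 1)^2."""
    return 4.0 * (theta @ theta - 1.0) * theta

theta = np.array([1.5, 0.5])
for n in range(1, 200_001):
    alpha = 1.0 / (n + 100)             # sum(alpha) = inf, sum(alpha^2) < inf
    xi = rng.normal(scale=0.1, size=2)  # zero-mean gradient noise
    theta = theta - alpha * (grad_f(theta) + xi)

# The iterates approach the unit circle and stop drifting along it:
print("final theta:", theta, " norm:", np.linalg.norm(theta))
```

For this rotationally symmetric f the gradient is purely radial, so motion along the circle comes only from the noise; because Σ αₙ² < ∞, that tangential random walk converges almost surely, and the limit is a single (random) point on the circle, in line with the single limit-point convergence described in the abstract.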




Cited in: 12 publications
