Convergence of stochastic proximal gradient algorithm

From MaRDI portal
Publication:2019902

DOI: 10.1007/S00245-019-09617-7
zbMATH Open: 1465.90101
arXiv: 1403.5074
OpenAlex: W2980398138
Wikidata: Q127020378 (Scholia: Q127020378)
MaRDI QID: Q2019902
FDO: Q2019902

Silvia Villa, Băng Công Vũ, Lorenzo Rosasco

Publication date: 22 April 2021

Published in: Applied Mathematics and Optimization

Abstract: We prove novel convergence results for a stochastic proximal gradient algorithm suitable for solving a large class of convex optimization problems, where the objective function is the sum of a smooth convex component and a possibly non-smooth convex component. We study convergence of the iterates and derive O(1/n) non-asymptotic bounds in expectation in the strongly convex case, as well as almost sure convergence results under weaker assumptions. Our approach allows us to avoid averaging and to weaken the boundedness assumptions that are often imposed in theoretical studies and may not hold in practice.


Full work available at URL: https://arxiv.org/abs/1403.5074
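The algorithm the abstract describes can be illustrated with a minimal sketch: at each iteration, take a step along an unbiased stochastic gradient of the smooth component, then apply the proximal operator of the non-smooth component. The sketch below applies this to L1-regularized least squares, whose proximal operator is soft-thresholding. It is not the authors' implementation; the function names (`soft_threshold`, `stochastic_prox_grad`), the sampling scheme (one row per iteration), and the diminishing step size 1/(10 + n) (of order 1/n, the regime in which the paper's strongly convex rate applies) are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def stochastic_prox_grad(A, b, lam, n_iter=20000, seed=0):
    """Minimize (1/2m)||Ax - b||^2 + lam*||x||_1 by stochastic proximal gradient.

    Illustrative sketch: smooth part is least squares, non-smooth part is the
    L1 norm; one randomly sampled row gives an unbiased gradient estimate.
    """
    rng = np.random.default_rng(seed)
    m, d = A.shape
    x = np.zeros(d)
    for n in range(1, n_iter + 1):
        i = rng.integers(m)
        gamma = 1.0 / (10.0 + n)  # diminishing step size of order 1/n (an assumed choice)
        grad = (A[i] @ x - b[i]) * A[i]  # unbiased estimate of the smooth gradient
        # Gradient step on the smooth part, then prox step on lam * ||.||_1.
        x = soft_threshold(x - gamma * grad, gamma * lam)
    return x

# Small usage example on a synthetic sparse recovery problem.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 5))
x_true = np.array([1.0, -2.0, 0.0, 0.0, 3.0])
b = A @ x_true
x_hat = stochastic_prox_grad(A, b, lam=0.01)
```

Note the structure matches the paper's setting: the convergence analysis concerns the iterates `x` themselves, without averaging them across iterations.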










Cited In (40)






