Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization


DOI: 10.1007/s10107-014-0846-1
zbMATH Open: 1332.90196
arXiv: 1308.6594
OpenAlex: W2029463628
MaRDI QID: Q5962719


Authors: Saeed Ghadimi, Guanghui Lan, Hongchao Zhang


Publication date: 23 February 2016

Published in: Mathematical Programming. Series A. Series B

Abstract: This paper considers a class of constrained stochastic composite optimization problems whose objective function is the sum of a differentiable (possibly nonconvex) component and a non-differentiable (but convex) component. To solve these problems, we propose a randomized stochastic projected gradient (RSPG) algorithm, in which a proper mini-batch of samples is taken at each iteration, with the batch size depending on the total budget of stochastic samples allowed. The RSPG algorithm also employs a general distance function in order to exploit the geometry of the feasible region. The complexity of the algorithm is established in a unified setting, which shows that it achieves nearly optimal complexity for convex stochastic programming. A post-optimization phase is also proposed to significantly reduce the variance of the solutions returned by the algorithm. In addition, based on the RSPG algorithm, a stochastic gradient-free algorithm that uses only stochastic zeroth-order information is also discussed. Some preliminary numerical results are provided.
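
To make the abstract's description concrete, here is a minimal, illustrative Python sketch of a mini-batch stochastic proximal-gradient loop with a randomized output rule, in the spirit of the RSPG method; it is not the paper's exact algorithm. It assumes the nonsmooth component is h(x) = lam * ||x||_1, so the prox step reduces to soft-thresholding, and it uses a constant batch size and step size, whereas the paper ties both to the total sample budget. The names rspg, soft_threshold, and stoch_grad are hypothetical.

```python
import numpy as np

def soft_threshold(v, tau):
    # Prox operator of tau * ||.||_1: componentwise shrinkage toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def rspg(stoch_grad, x0, n_iters, batch_size, step, lam, seed=0):
    # Mini-batch stochastic proximal-gradient loop with a randomized
    # output rule, sketched after the RSPG idea described in the abstract.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    iterates = []
    for _ in range(n_iters):
        # Average batch_size noisy gradients of the smooth part f
        # to reduce the variance of the search direction.
        g = np.mean([stoch_grad(x, rng) for _ in range(batch_size)], axis=0)
        # Proximal step for h(u) = lam * ||u||_1:
        # x <- argmin_u <g, u> + ||u - x||^2 / (2*step) + lam * ||u||_1,
        # i.e. soft-thresholding of the plain gradient step.
        x = soft_threshold(x - step * g, step * lam)
        iterates.append(x.copy())
    # Return an iterate chosen uniformly at random; this randomized
    # termination rule is what underlies the complexity guarantee
    # when f is nonconvex.
    return iterates[rng.integers(n_iters)]

# Toy usage: f(x) = 0.5 * ||A x - b||^2 observed through noisy gradients.
A = np.random.default_rng(1).normal(size=(50, 10))
b = A @ np.ones(10)
noisy_grad = lambda x, rng: A.T @ (A @ x - b) + rng.normal(scale=0.1, size=x.shape)
x_hat = rspg(noisy_grad, np.zeros(10), n_iters=300, batch_size=8, step=0.005, lam=0.05)
```

With a convex f the same loop recovers a standard mini-batch proximal gradient method; the paper's general distance (prox) function, budget-dependent batch sizes, and post-optimization variance-reduction phase are simplified away in this sketch.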


Full work available at URL: https://arxiv.org/abs/1308.6594



