Accelerated gradient methods for nonconvex nonlinear and stochastic programming

From MaRDI portal
Publication:263185

DOI: 10.1007/S10107-015-0871-8
zbMATH Open: 1335.62121
arXiv: 1310.3787
OpenAlex: W1987083649
MaRDI QID: Q263185
FDO: Q263185

Saeed Ghadimi, Guanghui Lan

Publication date: 4 April 2016

Published in: Mathematical Programming. Series A. Series B

Abstract: In this paper, we generalize Nesterov's well-known accelerated gradient (AG) method, originally designed for convex smooth optimization, to solve nonconvex and possibly stochastic optimization problems. We demonstrate that, by properly specifying the stepsize policy, the AG method exhibits the best known rate of convergence for solving general nonconvex smooth optimization problems using first-order information, similar to the gradient descent method. We then consider an important class of composite optimization problems and show that the AG method can solve them uniformly, i.e., by using the same aggressive stepsize policy as in the convex case, even if the problem turns out to be nonconvex. We demonstrate that the AG method exhibits an optimal rate of convergence if the composite problem is convex, and improves the best known rate of convergence if the problem is nonconvex. Based on the AG method, we also present new nonconvex stochastic approximation methods and show that they improve several existing rates of convergence for nonconvex stochastic optimization. To the best of our knowledge, this is the first time in the literature that convergence of the AG method has been established for solving nonconvex nonlinear programming.
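The AG scheme described in the abstract maintains intertwined sequences driven by an extrapolated "middle" point at which the gradient is evaluated. The following Python sketch illustrates one such Nesterov-type loop on a smooth (possibly nonconvex) objective; the function names, the stepsize choices (alpha_k = 2/(k+1), beta_k = 1/(2L), lambda_k = beta_k), and the toy objective are illustrative assumptions for readability, not the exact stepsize policy or analysis from the paper.

```python
import numpy as np

def accelerated_gradient(grad, x0, L, num_iters=1000):
    """Nesterov-type accelerated gradient loop for a smooth objective (sketch).

    grad      : callable returning the gradient of f at a point
    x0        : starting point (NumPy array)
    L         : Lipschitz constant of the gradient (or an estimate)
    num_iters : number of iterations
    """
    x = np.asarray(x0, dtype=float)   # the "x_k" sequence
    x_ag = x.copy()                   # the aggregated output sequence "x_k^{ag}"
    for k in range(1, num_iters + 1):
        alpha = 2.0 / (k + 1)         # extrapolation weight (illustrative choice)
        beta = 1.0 / (2.0 * L)        # short gradient stepsize (illustrative choice)
        lam = beta                    # stepsize for the x-sequence (illustrative choice)
        x_md = (1.0 - alpha) * x_ag + alpha * x   # extrapolated "middle" point
        g = grad(x_md)                # single gradient evaluation per iteration
        x = x - lam * g               # update the x-sequence
        x_ag = x_md - beta * g        # update the output sequence
    return x_ag

# Toy usage on a smooth nonconvex function f(x) = ||x||^2 + 0.1 * sum(sin(5 x_i)).
if __name__ == "__main__":
    grad_f = lambda x: 2.0 * x + 0.5 * np.cos(5.0 * x)          # gradient of f
    x_out = accelerated_gradient(grad_f, x0=np.ones(5), L=4.5)  # |f''| <= 2 + 2.5 = 4.5
    print(x_out)
```

For the stochastic setting mentioned in the abstract, the exact gradient grad(x_md) would be replaced by a (mini-batch) stochastic estimate; the sketch above covers only the deterministic smooth case.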


Full work available at URL: https://arxiv.org/abs/1310.3787


This page was built for publication: Accelerated gradient methods for nonconvex nonlinear and stochastic programming
