On the Iteration Complexity of Smoothed Proximal ALM for Nonconvex Optimization Problem with Convex Constraints

Publication: 6404871

arXiv: 2207.06304 · MaRDI QID: Q6404871 · FDO: Q6404871


Authors: Jiawei Zhang, Wenqiang Pu, Zhi-Quan Luo


Publication date: 13 July 2022

Abstract: It is well known that the lower bound on the iteration complexity for solving nonconvex unconstrained optimization problems is $\Omega(1/\epsilon^2)$, which is achieved by the standard gradient descent algorithm when the objective function is smooth. This lower bound still holds for nonconvex constrained problems, yet it has remained unknown whether a first-order method can achieve it in that setting. In this paper, we show that a simple single-loop first-order algorithm, the smoothed proximal augmented Lagrangian method (ALM), achieves this iteration complexity lower bound. The key technical contribution is a strong local error bound for a general convex constrained problem, which is of independent interest.
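The smoothed proximal ALM referred to in the abstract is a single-loop scheme: each iteration performs one projected gradient step on a proximally smoothed augmented Lagrangian, one dual ascent step, and one averaging update of the proximal center, with no inner subproblem solves. Below is a minimal NumPy sketch of one common form of such a scheme, written for the linearly constrained model problem min f(x) subject to Ax = b and x in a convex box X; the toy instance (f, A, b), the parameters rho, p, alpha, gamma, beta, and the fixed iteration budget are illustrative assumptions, not the choices prescribed by the paper's analysis.

```python
import numpy as np

# Toy instance: a smooth nonconvex objective over the box X = [-2, 2]^n
# with linear coupling constraints A x = b (illustrative, not from the paper).
rng = np.random.default_rng(0)
n, m = 20, 5
A = rng.standard_normal((m, n))
b = A @ rng.uniform(-1.0, 1.0, size=n)   # guarantees a feasible point exists
Q = rng.standard_normal((n, n))
Q = 0.5 * (Q + Q.T)                      # symmetric, generally indefinite => nonconvex f

def f(x):
    return 0.5 * x @ Q @ x + np.sum(np.cos(x))

def grad_f(x):
    return Q @ x - np.sin(x)

def proj_X(x):
    return np.clip(x, -2.0, 2.0)         # Euclidean projection onto the box X

# Illustrative parameters (assumed, not the values prescribed by the analysis).
rho, p = 10.0, 10.0                      # penalty weight and proximal (smoothing) weight
alpha, gamma, beta = 1e-3, 1e-2, 0.1     # primal, dual, and averaging step sizes

x = np.zeros(n)                          # primal iterate
z = x.copy()                             # proximal center ("smoothing" sequence)
y = np.zeros(m)                          # dual multiplier

for k in range(50000):
    # Projected gradient step on the proximally smoothed augmented Lagrangian
    #   K(x, z, y) = f(x) + <y, Ax - b> + (rho/2)||Ax - b||^2 + (p/2)||x - z||^2
    grad_K = grad_f(x) + A.T @ (y + rho * (A @ x - b)) + p * (x - z)
    x = proj_X(x - alpha * grad_K)
    # Dual ascent step on the multiplier
    y = y + gamma * (A @ x - b)
    # Averaging update of the proximal center
    z = z + beta * (x - z)

print("objective:", f(x))
print("constraint violation:", np.linalg.norm(A @ x - b))
```

The fixed iteration budget here stands in for a stopping test on stationarity and feasibility; the abstract's complexity claim concerns how many such single-loop iterations are needed to reach an epsilon-accurate point.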
