Conditions for linear convergence of the gradient method for non-convex optimization


DOI: 10.1007/S11590-023-01981-2
zbMATH Open: 1519.90180
arXiv: 2204.00647
OpenAlex: W4321769656
MaRDI QID: Q6097482
FDO: Q6097482


Authors: Hadi Abbaszadehpeivasti, E. de Klerk, Moslem Zamani


Publication date: 5 June 2023

Published in: Optimization Letters

Abstract: In this paper, we derive a new linear convergence rate for the gradient method with fixed step lengths for non-convex smooth optimization problems satisfying the Polyak-Łojasiewicz (PL) inequality. We establish that the PL inequality is a necessary and sufficient condition for linear convergence to the optimal value for this class of problems. We list some related classes of functions for which the gradient method may enjoy a linear convergence rate. Moreover, we investigate their relationship with the PL inequality.
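For context, a minimal sketch of the setting in standard notation (the symbols \(L\), \(\mu\), \(f^*\) are conventional and not taken from this entry): for an \(L\)-smooth function \(f\) with optimal value \(f^*\), the PL inequality requires that there exist \(\mu > 0\) with

\[
\tfrac{1}{2}\,\|\nabla f(x)\|^{2} \;\ge\; \mu\,\bigl(f(x) - f^{*}\bigr) \qquad \text{for all } x .
\]

Under this condition, the classical baseline analysis (a well-known rate, not the sharper new rate derived in the paper) shows that the gradient method \(x_{k+1} = x_k - \tfrac{1}{L}\nabla f(x_k)\) satisfies

\[
f(x_{k+1}) - f^{*} \;\le\; \Bigl(1 - \frac{\mu}{L}\Bigr)\bigl(f(x_k) - f^{*}\bigr),
\]

i.e., the optimality gap contracts linearly at every step, with no convexity assumption on \(f\).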


Full work available at URL: https://arxiv.org/abs/2204.00647






Cited in: 11 documents





