Conditions for linear convergence of the gradient method for non-convex optimization
DOI: 10.1007/s11590-023-01981-2
zbMath: 1519.90180
arXiv: 2204.00647
OpenAlex: W4321769656
MaRDI QID: Q6097482
Hadi Abbaszadehpeivasti, Etienne de Klerk, Moslem Zamani
Publication date: 5 June 2023
Published in: Optimization Letters
Full work available at URL: https://arxiv.org/abs/2204.00647
Keywords: semidefinite programming; gradient method; performance estimation problem; Polyak-Łojasiewicz inequality; weakly convex optimization
MSC classifications: Semidefinite programming (90C22); Nonconvex programming, global optimization (90C26); Methods of reduced gradient type (90C52)
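For orientation, the Polyak-Łojasiewicz (PL) inequality named in the keywords is, in its standard textbook form (a general statement, not a result quoted from this paper): an \(L\)-smooth function \(f\) with minimum value \(f^*\) satisfies the PL condition with parameter \(\mu > 0\) if
\[ \tfrac{1}{2}\,\|\nabla f(x)\|^2 \;\ge\; \mu\,\bigl(f(x) - f^*\bigr) \quad \text{for all } x. \]
Under this condition, gradient descent with step size \(1/L\) attains the classical linear rate
\[ f(x_{k+1}) - f^* \;\le\; \Bigl(1 - \tfrac{\mu}{L}\Bigr)\bigl(f(x_k) - f^*\bigr), \]
so linear convergence does not require convexity.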
Cites Work
- Gradient methods for minimizing composite functions
- Smooth strongly convex interpolation and exact worst-case performance of first-order methods
- Lectures on convex optimization
- On the convergence of the proximal algorithm for nonsmooth functions involving analytic features
- On the worst-case complexity of the gradient method with exact line search for smooth strongly convex functions
- Analysis of biased stochastic gradient descent using sequential semidefinite programs
- Lower bounds for finding stationary points I
- Performance of first-order methods for smooth convex minimization: a novel approach
- Linear convergence of first order methods for non-strongly convex optimization
- The exact worst-case convergence rate of the gradient method with fixed step lengths for \(L\)-smooth functions
- Proximal Alternating Minimization and Projection Methods for Nonconvex Problems: An Approach Based on the Kurdyka-Łojasiewicz Inequality
- Stochastic Model-Based Minimization of Weakly Convex Functions
- Worst-Case Convergence Analysis of Inexact Gradient and Newton Methods Through Semidefinite Programming Performance Estimation