Conditions for linear convergence of the gradient method for non-convex optimization
DOI: 10.1007/s11590-023-01981-2 · zbMATH Open: 1519.90180 · arXiv: 2204.00647 · OpenAlex: W4321769656 · MaRDI QID: Q6097482 · FDO: Q6097482
Authors: Hadi Abbaszadehpeivasti, E. de Klerk, Moslem Zamani
Publication date: 5 June 2023
Published in: Optimization Letters
Full work available at URL: https://arxiv.org/abs/2204.00647
Recommendations
- The gradient projection algorithm for smooth sets and functions in nonconvex case
- On linear convergence of non-Euclidean gradient methods without strong convexity and Lipschitz gradient continuity
- Convergence rates for the heavy-ball continuous dynamics for non-convex optimization, under Polyak-Łojasiewicz condition
- New analysis of linear convergence of gradient-type methods via unifying error bound conditions
- Linear convergence of first order methods for non-strongly convex optimization
Keywords: semidefinite programming; gradient method; performance estimation problem; Polyak-Łojasiewicz inequality; weakly convex optimization
MSC classification: Nonconvex programming, global optimization (90C26); Semidefinite programming (90C22); Methods of reduced gradient type (90C52)
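For context, the Polyak-Łojasiewicz (PL) inequality named in the keywords is the standard condition under which the gradient method attains linear convergence without convexity. A common textbook statement follows; the symbols \(\mu\), \(L\), and \(x_k\) are illustrative notation, not taken from this record. A differentiable function \(f\) with minimum value \(f^*\) satisfies the PL inequality with constant \(\mu > 0\) if
\[
  \tfrac{1}{2}\,\|\nabla f(x)\|^2 \;\ge\; \mu\,\bigl(f(x) - f^*\bigr) \quad \text{for all } x.
\]
If, in addition, \(f\) is \(L\)-smooth, then gradient descent with step size \(1/L\) satisfies
\[
  f(x_k) - f^* \;\le\; \Bigl(1 - \tfrac{\mu}{L}\Bigr)^{k}\,\bigl(f(x_0) - f^*\bigr),
\]
i.e. linear convergence of the function values, even for non-convex \(f\).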
Cites Work
- Gradient methods for minimizing composite functions
- Proximal Alternating Minimization and Projection Methods for Nonconvex Problems: An Approach Based on the Kurdyka-Łojasiewicz Inequality
- On the convergence of the proximal algorithm for nonsmooth functions involving analytic features
- Performance of first-order methods for smooth convex minimization: a novel approach
- Smooth strongly convex interpolation and exact worst-case performance of first-order methods
- Linear convergence of first order methods for non-strongly convex optimization
- Title not available
- Lectures on convex optimization
- On the worst-case complexity of the gradient method with exact line search for smooth strongly convex functions
- Analysis of biased stochastic gradient descent using sequential semidefinite programs
- Stochastic model-based minimization of weakly convex functions
- Lower bounds for finding stationary points I
- Worst-case convergence analysis of inexact gradient and Newton methods through semidefinite programming performance estimation
- The exact worst-case convergence rate of the gradient method with fixed step lengths for \(L\)-smooth functions
Cited In (11)
- Non-ergodic linear convergence property of the delayed gradient descent under the strongly convexity and the Polyak-Łojasiewicz condition
- Convergence rates for the heavy-ball continuous dynamics for non-convex optimization, under Polyak-Łojasiewicz condition
- On the rate of convergence of the difference-of-convex algorithm (DCA)
- The exact worst-case convergence rate of the alternating direction method of multipliers
- Interpolation conditions for linear operators and applications to performance estimation problems
- Linearly convergent away-step conditional gradient for non-strongly convex functions
- On linear convergence of non-Euclidean gradient methods without strong convexity and Lipschitz gradient continuity
- Linear convergence of first order methods for non-strongly convex optimization
- New analysis of linear convergence of gradient-type methods via unifying error bound conditions
- Convergence rate analysis of the gradient descent–ascent method for convex–concave saddle-point problems
- Title not available