Conditions for linear convergence of the gradient method for non-convex optimization
Abstract: In this paper, we derive a new linear convergence rate for the gradient method with fixed step lengths for non-convex smooth optimization problems satisfying the Polyak-Łojasiewicz (PL) inequality. We establish that the PL inequality is a necessary and sufficient condition for linear convergence to the optimal value for this class of problems. We list some related classes of functions for which the gradient method may enjoy a linear convergence rate. Moreover, we investigate their relationship with the PL inequality.
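For context (standard background recalled here, not the paper's new result): an \(L\)-smooth function \(f\) with minimum value \(f^*\) satisfies the PL inequality with parameter \(\mu > 0\) if
\[ \tfrac{1}{2}\|\nabla f(x)\|^2 \ge \mu\bigl(f(x) - f^*\bigr) \quad \text{for all } x. \]
The classical consequence, going back to Polyak, is that the gradient method with fixed step length \(1/L\) converges linearly in function value,
\[ f(x_{k+1}) - f^* \le \Bigl(1 - \tfrac{\mu}{L}\Bigr)\bigl(f(x_k) - f^*\bigr), \]
obtained by combining the descent lemma \(f(x_{k+1}) \le f(x_k) - \tfrac{1}{2L}\|\nabla f(x_k)\|^2\) with the PL inequality; convexity plays no role in this argument. The bound above is only the classical benchmark; for the new rate, consult the paper itself.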
Recommendations
- The gradient projection algorithm for smooth sets and functions in nonconvex case
- On linear convergence of non-Euclidean gradient methods without strong convexity and Lipschitz gradient continuity
- Convergence rates for the heavy-ball continuous dynamics for non-convex optimization, under Polyak-Łojasiewicz condition
- New analysis of linear convergence of gradient-type methods via unifying error bound conditions
- Linear convergence of first order methods for non-strongly convex optimization
Cites work
- scientific article; zbMATH DE number 3313108
- Analysis of biased stochastic gradient descent using sequential semidefinite programs
- Gradient methods for minimizing composite functions
- Lectures on convex optimization
- Linear convergence of first order methods for non-strongly convex optimization
- Lower bounds for finding stationary points I
- On the convergence of the proximal algorithm for nonsmooth functions involving analytic features
- On the worst-case complexity of the gradient method with exact line search for smooth strongly convex functions
- Performance of first-order methods for smooth convex minimization: a novel approach
- Proximal Alternating Minimization and Projection Methods for Nonconvex Problems: An Approach Based on the Kurdyka-Łojasiewicz Inequality
- Smooth strongly convex interpolation and exact worst-case performance of first-order methods
- Stochastic model-based minimization of weakly convex functions
- The exact worst-case convergence rate of the gradient method with fixed step lengths for \(L\)-smooth functions
- Worst-case convergence analysis of inexact gradient and Newton methods through semidefinite programming performance estimation
Cited in (11)
- Non-ergodic linear convergence property of the delayed gradient descent under the strongly convexity and the Polyak-Łojasiewicz condition
- Convergence rates for the heavy-ball continuous dynamics for non-convex optimization, under Polyak-Łojasiewicz condition
- On the rate of convergence of the difference-of-convex algorithm (DCA)
- The exact worst-case convergence rate of the alternating direction method of multipliers
- Interpolation conditions for linear operators and applications to performance estimation problems
- Linearly convergent away-step conditional gradient for non-strongly convex functions
- On linear convergence of non-Euclidean gradient methods without strong convexity and Lipschitz gradient continuity
- Linear convergence of first order methods for non-strongly convex optimization
- New analysis of linear convergence of gradient-type methods via unifying error bound conditions
- Convergence rate analysis of the gradient descent–ascent method for convex–concave saddle-point problems
- scientific article; zbMATH DE number 4057300