Linear convergence of first order methods for non-strongly convex optimization
Abstract: The standard assumption for proving linear convergence of first order methods for smooth convex optimization is strong convexity of the objective function, an assumption which does not hold for many practical applications. In this paper, we derive linear convergence rates of several first order methods for solving smooth non-strongly convex constrained optimization problems, i.e. problems whose objective function has a Lipschitz continuous gradient and satisfies some relaxed strong convexity condition. In particular, in the case of smooth constrained convex optimization, we provide several relaxations of the strong convexity condition and prove that they are sufficient for obtaining linear convergence of several first order methods such as projected gradient, fast gradient and feasible descent methods. We also provide examples of functional classes that satisfy our proposed relaxations of strong convexity. Finally, we show that the proposed relaxed strong convexity conditions cover important applications, ranging from solving linear systems and linear programming to dual formulations of linearly constrained convex problems.
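Below is a minimal numerical sketch (not taken from the paper) of the behaviour the abstract describes: projected gradient applied to a constrained least-squares problem \(\min_{x \in [l,u]^n} \tfrac{1}{2}\|Ax-b\|^2\) with a rank-deficient matrix \(A\), so the objective is smooth but not strongly convex, yet satisfies a quadratic-growth-type relaxation \(f(x) - f^* \ge \tfrac{\kappa}{2}\,\mathrm{dist}(x, X^*)^2\) on the feasible set. The problem sizes, box bounds, iteration count, and the use of the best observed value as a proxy for \(f^*\) are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Illustrative sketch (not the paper's code): projected gradient on
#   f(x) = 0.5 * ||A x - b||^2  over the box [lo, hi]^n,
# with A rank-deficient, so f is smooth but NOT strongly convex.
rng = np.random.default_rng(0)
m, n, r = 30, 50, 10                      # rank r < n  =>  A^T A is singular
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
b = rng.standard_normal(m)

L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of grad f
lo, hi = -1.0, 1.0                        # assumed box constraints

def grad(x):
    return A.T @ (A @ x - b)

def project(x):
    return np.clip(x, lo, hi)             # projection onto the box

x = np.zeros(n)
f_vals = []
for _ in range(500):
    x = project(x - grad(x) / L)          # projected gradient step, stepsize 1/L
    f_vals.append(0.5 * np.linalg.norm(A @ x - b) ** 2)

f_star = min(f_vals)                      # proxy for the optimal value f*
gap = lambda k: f_vals[k] - f_star + 1e-16
# Ratios of successive optimality gaps; values stabilising below 1 indicate
# linear (geometric) convergence even though f is not strongly convex.
print([round(gap(k) / gap(k - 1), 4) for k in (50, 100, 150, 200)])
```

In this sketch the ratio of successive optimality gaps settles at a constant below one, the empirical signature of linear convergence in the absence of strong convexity.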
Recommendations
- Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization
- Conditions for linear convergence of the gradient method for non-convex optimization
- Variational analysis perspective on linear convergence of some first order methods for nonsmooth convex optimization problems
- First-order methods for convex optimization
- Analysis of optimization algorithms via integral quadratic constraints: nonstrongly convex problems
Cites work
- scientific article; zbMATH DE number 6378119
- scientific article; zbMATH DE number 1328979
- A unified approach to error bounds for structured convex optimization problems
- Adaptive restart for accelerated gradient schemes
- An Asynchronous Parallel Stochastic Coordinate Descent Algorithm
- Conic optimization via operator splitting and homogeneous self-dual embedding
- Convex optimization: algorithms and complexity
- Coordinate descent algorithms
- Error bounds and convergence analysis of feasible descent methods: A general approach
- Error bounds for solutions of linear equations and inequalities
- Error bounds, quadratic growth, and linear convergence of proximal methods
- Gradient methods for minimizing composite functions
- Introductory lectures on convex optimization. A basic course.
- Linearly convergent away-step conditional gradient for non-strongly convex functions
- Parallel random coordinate descent method for composite minimization: convergence analysis and error bounds
- Randomized methods for linear constraints: convergence rates and conditioning
- Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization
- Robust Stochastic Approximation Approach to Stochastic Programming
- Weak sharp minima revisited. III: Error bounds for differentiable convex inclusions
Cited in (93)
- General convergence analysis of stochastic first-order methods for composite optimization
- Active-set identification with complexity guarantees of an almost cyclic 2-coordinate descent method with Armijo line search
- Linear Convergence of Random Dual Coordinate Descent on Nonpolyhedral Convex Problems
- Global complexity analysis of inexact successive quadratic approximation methods for regularized optimization under mild assumptions
- New nonasymptotic convergence rates of stochastic proximal point algorithm for stochastic convex optimization
- Mini-batch stochastic subgradient for functional constrained optimization
- Conditions for linear convergence of the gradient method for non-convex optimization
- Accelerating Stochastic Composition Optimization
- Inexact successive quadratic approximation for regularized optimization
- scientific article; zbMATH DE number 7307490
- On the rate of convergence of alternating minimization for non-smooth non-strongly convex optimization in Banach spaces
- Subgradient regularized multivariate convex regression at scale
- First-Order Methods for Nonconvex Quadratic Minimization
- First-order methods for convex optimization
- Linear convergence rates for variants of the alternating direction method of multipliers in smooth cases
- Efficient iterative method for SOAV minimization problem with linear equality and box constraints and its linear convergence
- Efficiency of higher-order algorithms for minimizing composite functions
- Linear convergence of Frank-Wolfe for rank-one matrix recovery without strong convexity
- A Systematic Approach to Lyapunov Analyses of Continuous-Time Models in Convex Optimization
- MGProx: a nonsmooth multigrid proximal gradient method with adaptive restriction for strongly convex optimization
- Convergence rates for the heavy-ball continuous dynamics for non-convex optimization, under Polyak-Łojasiewicz condition
- Error bounds, quadratic growth, and linear convergence of proximal methods
- Randomized projection methods for convex feasibility: conditioning and convergence rates
- Proximal alternating penalty algorithms for nonsmooth constrained convex optimization
- Convergence rates of an inertial gradient descent algorithm under growth and flatness conditions
- New characterizations of Hoffman constants for systems of linear constraints
- The exact worst-case convergence rate of the alternating direction method of multipliers
- A strict complementarity approach to error bound and sensitivity of solution of conic programs
- Convergence rates for deterministic and stochastic subgradient methods without Lipschitz continuity
- Exponential convergence of distributed optimization for heterogeneous linear multi-agent systems over unbalanced digraphs
- Consensus-based optimization methods converge globally
- Interpolation conditions for linear operators and applications to performance estimation problems
- Distributed accelerated gradient methods with restart under quadratic growth condition
- scientific article; zbMATH DE number 7370578
- Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization
- Provable accelerated gradient method for nonconvex low rank optimization
- Restarting the accelerated coordinate descent method with a rough strong convexity estimate
- Convergence rates of the heavy ball method for quasi-strongly convex optimization
- Parallel random block-coordinate forward-backward algorithm: a unified convergence analysis
- Convergence rates for the stochastic gradient descent method for non-convex objective functions
- Accelerated alternating direction method of multipliers: an optimal \(O(1 / K)\) nonergodic analysis
- Scaled relative graphs: nonexpansive operators via 2D Euclidean geometry
- scientific article; zbMATH DE number 7811224
- The condition number of a function relative to a set
- Acceleration and restart for the randomized Bregman-Kaczmarz method
- Local linear convergence of proximal coordinate descent algorithm
- First-order methods for problems with \(O(1)\) functional constraints can have almost the same convergence rate as for unconstrained problems
- Convergence rates of the heavy-ball method under the Łojasiewicz property
- Random minibatch subgradient algorithms for convex problems with functional constraints
- Synthesis of accelerated gradient algorithms for optimization and saddle point problems using Lyapunov functions and LMIs
- New subspace minimization conjugate gradient methods based on regularization model for unconstrained optimization
- Convergence of the forward-backward algorithm: beyond the worst-case with the help of geometry
- A linearly convergent doubly stochastic Gauss-Seidel algorithm for solving linear equations and a certain class of over-parameterized optimization problems
- New analysis of linear convergence of gradient-type methods via unifying error bound conditions
- Parameter-free FISTA by adaptive restart and backtracking
- Computation of the maximal invariant set of discrete-time linear systems subject to a class of non-convex constraints
- Global convergence rate of proximal incremental aggregated gradient methods
- A globally convergent proximal Newton-type method in nonsmooth convex optimization
- Proximal-like incremental aggregated gradient method with linear convergence under Bregman distance growth conditions
- Exact worst-case convergence rates of the proximal gradient method for composite convex minimization
- Level-set subdifferential error bounds and linear convergence of Bregman proximal gradient method
- General Hölder smooth convergence rates follow from specialized rates assuming growth bounds
- Stochastic block projection algorithms with extrapolation for convex feasibility problems
- On the linear convergence of forward-backward splitting method. I: Convergence analysis
- New results on multi-dimensional linear discriminant analysis
- Quadratic growth conditions and uniqueness of optimal solution to Lasso
- Error bounds for non-polyhedral convex optimization and applications to linear convergence of FDM and PGM
- An update-and-stabilize framework for the minimum-norm-point problem
- A speed restart scheme for a dynamics with Hessian-driven damping
- Lipschitz continuity of the metric projection operator and convergence of gradient methods
- On the complexity analysis of the primal solutions for the accelerated randomized dual coordinate ascent
- An iterative reduction FISTA algorithm for large-scale LASSO
- An inexact proximal augmented Lagrangian framework with arbitrary linearly convergent inner solver for composite convex optimization
- From differential equation solvers to accelerated first-order methods for convex optimization
- Convergence results of a new monotone inertial forward-backward splitting algorithm under the local Hölder error bound condition
- Convergence rate analysis of the gradient descent–ascent method for convex–concave saddle-point problems
- Variational analysis perspective on linear convergence of some first order methods for nonsmooth convex optimization problems
- Distributed block-diagonal approximation methods for regularized empirical risk minimization
- Linearly-convergent FISTA variant for composite optimization with duality
- Coordinate descent methods beyond smoothness and separability
- A Kaczmarz Algorithm for Solving Tree Based Distributed Systems of Equations
- A new Lagrangian-based first-order method for nonconvex constrained optimization
- Convergence results of a nested decentralized gradient method for non-strongly convex problems
- Generalized Nesterov's accelerated proximal gradient algorithms with convergence rate of order \(o(1/k^2)\)
- Exponential convergence of distributed primal-dual convex optimization algorithm without strong convexity
- Perseus: a simple and optimal high-order method for variational inequalities
- Inertial proximal incremental aggregated gradient method with linear convergence guarantees
- Variance reduced moving balls approximation method for smooth constrained minimization problems
- Thresholding gradient methods in Hilbert spaces: support identification and linear convergence
- Accelerate stochastic subgradient method by leveraging local growth condition
- FISTA is an automatic geometrically optimized algorithm for strongly convex functions
- Exact gradient methods with memory
- The restricted strong convexity revisited: analysis of equivalence to error bound and quadratic growth