Linear convergence of first order methods for non-strongly convex optimization
Publication: 2414900
DOI: 10.1007/S10107-018-1232-1 · zbMath: 1412.90111 · arXiv: 1504.06298 · OpenAlex: W2963830980 · Wikidata: Q105946458 · Scholia: Q105946458 · MaRDI QID: Q2414900
François Glineur, Yu. E. Nesterov, Ion Necoara
Publication date: 17 May 2019
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/1504.06298
Numerical mathematical programming methods (65K05) · Convex programming (90C25) · Large-scale problems in mathematical programming (90C06)
Related Items (89)
Efficient iterative method for SOAV minimization problem with linear equality and box constraints and its linear convergence ⋮ Convergence results of a new monotone inertial forward-backward splitting algorithm under the local Hölder error bound condition ⋮ Parallel random block-coordinate forward-backward algorithm: a unified convergence analysis ⋮ Exponential convergence of distributed optimization for heterogeneous linear multi-agent systems over unbalanced digraphs ⋮ On the Complexity Analysis of the Primal Solutions for the Accelerated Randomized Dual Coordinate Ascent ⋮ Stochastic block projection algorithms with extrapolation for convex feasibility problems ⋮ Exact gradient methods with memory ⋮ Scaled relative graphs: nonexpansive operators via 2D Euclidean geometry ⋮ Synthesis of accelerated gradient algorithms for optimization and saddle point problems using Lyapunov functions and LMIs ⋮ Unnamed Item ⋮ Active-Set Identification with Complexity Guarantees of an Almost Cyclic 2-Coordinate Descent Method with Armijo Line Search ⋮ Exact worst-case convergence rates of the proximal gradient method for composite convex minimization ⋮ Computation of the maximal invariant set of discrete-time linear systems subject to a class of non-convex constraints ⋮ An Iterative Reduction FISTA Algorithm for Large-Scale LASSO ⋮ New results on multi-dimensional linear discriminant analysis ⋮ Quadratic growth conditions and uniqueness of optimal solution to Lasso ⋮ Convergence Rates of the Heavy Ball Method for Quasi-strongly Convex Optimization ⋮ Linear convergence of Frank-Wolfe for rank-one matrix recovery without strong convexity ⋮ Inexact successive quadratic approximation for regularized optimization ⋮ An update-and-stabilize framework for the minimum-norm-point problem ⋮ A speed restart scheme for a dynamics with Hessian-driven damping ⋮ Conditions for linear convergence of the gradient method for non-convex optimization ⋮ Linearly-convergent FISTA variant for composite optimization with duality ⋮ A Systematic Approach to Lyapunov Analyses of Continuous-Time Models in Convex Optimization ⋮ FISTA is an automatic geometrically optimized algorithm for strongly convex functions ⋮ Unnamed Item ⋮ Thresholding gradient methods in Hilbert spaces: support identification and linear convergence ⋮ A linearly convergent doubly stochastic Gauss-Seidel algorithm for solving linear equations and a certain class of over-parameterized optimization problems ⋮ Efficiency of higher-order algorithms for minimizing composite functions ⋮ Convergence rates of the heavy-ball method under the Łojasiewicz property ⋮ A globally convergent proximal Newton-type method in nonsmooth convex optimization ⋮ Convergence of the forward-backward algorithm: beyond the worst-case with the help of geometry ⋮ A strict complementarity approach to error bound and sensitivity of solution of conic programs ⋮ First-order methods for convex optimization ⋮ Local linear convergence of proximal coordinate descent algorithm ⋮ Global complexity analysis of inexact successive quadratic approximation methods for regularized optimization under mild assumptions ⋮ General Hölder smooth convergence rates follow from specialized rates assuming growth bounds ⋮ Unnamed Item ⋮ Proximal alternating penalty algorithms for nonsmooth constrained convex optimization ⋮ Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods ⋮ The exact worst-case convergence rate of the alternating direction method of multipliers ⋮ Linear convergence rates for variants of the alternating direction method of multipliers in smooth cases ⋮ New nonasymptotic convergence rates of stochastic proximal point algorithm for stochastic convex optimization ⋮ The restricted strong convexity revisited: analysis of equivalence to error bound and quadratic growth ⋮ New characterizations of Hoffman constants for systems of linear constraints ⋮ Convergence rates of an inertial gradient descent algorithm under growth and flatness conditions ⋮ Coordinate descent methods beyond smoothness and separability ⋮ Error bounds for non-polyhedral convex optimization and applications to linear convergence of FDM and PGM ⋮ Exponential convergence of distributed primal-dual convex optimization algorithm without strong convexity ⋮ Mini-batch stochastic subgradient for functional constrained optimization ⋮ On the linear convergence of forward-backward splitting method. I: Convergence analysis ⋮ General convergence analysis of stochastic first-order methods for composite optimization ⋮ Acceleration and restart for the randomized Bregman-Kaczmarz method ⋮ Variance reduced moving balls approximation method for smooth constrained minimization problems ⋮ The condition number of a function relative to a set ⋮ Subgradient regularized multivariate convex regression at scale ⋮ MGProx: a nonsmooth multigrid proximal gradient method with adaptive restriction for strongly convex optimization ⋮ New subspace minimization conjugate gradient methods based on regularization model for unconstrained optimization ⋮ Consensus-based optimization methods converge globally ⋮ Interpolation conditions for linear operators and applications to performance estimation problems ⋮ Distributed accelerated gradient methods with restart under quadratic growth condition ⋮ Parameter-free FISTA by adaptive restart and backtracking ⋮ Lipschitz continuity of the metric projection operator and convergence of gradient methods ⋮ Convergence rate analysis of the gradient descent–ascent method for convex–concave saddle-point problems ⋮ Level-set subdifferential error bounds and linear convergence of Bregman proximal gradient method ⋮ Perseus: a simple and optimal high-order method for variational inequalities ⋮ New analysis of linear convergence of gradient-type methods via unifying error bound conditions ⋮ Restarting the accelerated coordinate descent method with a rough strong convexity estimate ⋮ Error Bounds, Quadratic Growth, and Linear Convergence of Proximal Methods ⋮ Provable accelerated gradient method for nonconvex low rank optimization ⋮ An inexact proximal augmented Lagrangian framework with arbitrary linearly convergent inner solver for composite convex optimization ⋮ Accelerated alternating direction method of multipliers: an optimal \(O(1 / K)\) nonergodic analysis ⋮ Convergence Rates for Deterministic and Stochastic Subgradient Methods without Lipschitz Continuity ⋮ Variational analysis perspective on linear convergence of some first order methods for nonsmooth convex optimization problems ⋮ Accelerate stochastic subgradient method by leveraging local growth condition ⋮ Accelerating Stochastic Composition Optimization ⋮ Proximal-Like Incremental Aggregated Gradient Method with Linear Convergence Under Bregman Distance Growth Conditions ⋮ Randomized Projection Methods for Convex Feasibility: Conditioning and Convergence Rates ⋮ Convergence results of a nested decentralized gradient method for non-strongly convex problems ⋮ Distributed block-diagonal approximation methods for regularized empirical risk minimization ⋮ Generalized Nesterov's accelerated proximal gradient algorithms with convergence rate of order \(o(1/k^2)\) ⋮ Unnamed Item ⋮ Inertial proximal incremental aggregated gradient method with linear convergence guarantees ⋮ From differential equation solvers to accelerated first-order methods for convex optimization ⋮ Convergence rates for the heavy-ball continuous dynamics for non-convex optimization, under Polyak-Łojasiewicz condition ⋮ Random minibatch subgradient algorithms for convex problems with functional constraints ⋮ A Kaczmarz Algorithm for Solving Tree Based Distributed Systems of Equations ⋮ Linear Convergence of Random Dual Coordinate Descent on Nonpolyhedral Convex Problems ⋮ On the rate of convergence of alternating minimization for non-smooth non-strongly convex optimization in Banach spaces
Uses Software
Cites Work
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Conic optimization via operator splitting and homogeneous self-dual embedding
- Gradient methods for minimizing composite functions
- Weak sharp minima revisited. III: Error bounds for differentiable convex inclusions
- Error bounds and convergence analysis of feasible descent methods: A general approach
- Introductory lectures on convex optimization. A basic course.
- A unified approach to error bounds for structured convex optimization problems
- Coordinate descent algorithms
- Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization
- Adaptive restart for accelerated gradient schemes
- Linearly convergent away-step conditional gradient for non-strongly convex functions
- Randomized Methods for Linear Constraints: Convergence Rates and Conditioning
- Parallel Random Coordinate Descent Method for Composite Minimization: Convergence Analysis and Error Bounds
- Robust Stochastic Approximation Approach to Stochastic Programming
- Error bounds for solutions of linear equations and inequalities
- Error Bounds, Quadratic Growth, and Linear Convergence of Proximal Methods
- An Asynchronous Parallel Stochastic Coordinate Descent Algorithm