Linear convergence of first order methods for non-strongly convex optimization


DOI: 10.1007/s10107-018-1232-1
zbMATH: 1412.90111
arXiv: 1504.06298
OpenAlex: W2963830980
Wikidata: Q105946458 (Scholia: Q105946458)
MaRDI QID: Q2414900

Ion Necoara, Yu. E. Nesterov, François Glineur

Publication date: 17 May 2019

Published in: Mathematical Programming. Series A. Series B

Full work available at URL: https://arxiv.org/abs/1504.06298



Related Items

Efficient iterative method for SOAV minimization problem with linear equality and box constraints and its linear convergence
Convergence results of a new monotone inertial forward-backward splitting algorithm under the local Hölder error bound condition
Parallel random block-coordinate forward-backward algorithm: a unified convergence analysis
Exponential convergence of distributed optimization for heterogeneous linear multi-agent systems over unbalanced digraphs
On the Complexity Analysis of the Primal Solutions for the Accelerated Randomized Dual Coordinate Ascent
Stochastic block projection algorithms with extrapolation for convex feasibility problems
Exact gradient methods with memory
Scaled relative graphs: nonexpansive operators via 2D Euclidean geometry
Synthesis of accelerated gradient algorithms for optimization and saddle point problems using Lyapunov functions and LMIs
Active-Set Identification with Complexity Guarantees of an Almost Cyclic 2-Coordinate Descent Method with Armijo Line Search
Exact worst-case convergence rates of the proximal gradient method for composite convex minimization
Computation of the maximal invariant set of discrete-time linear systems subject to a class of non-convex constraints
An Iterative Reduction FISTA Algorithm for Large-Scale LASSO
New results on multi-dimensional linear discriminant analysis
Quadratic growth conditions and uniqueness of optimal solution to Lasso
Convergence Rates of the Heavy Ball Method for Quasi-strongly Convex Optimization
Linear convergence of Frank-Wolfe for rank-one matrix recovery without strong convexity
Inexact successive quadratic approximation for regularized optimization
An update-and-stabilize framework for the minimum-norm-point problem
A speed restart scheme for a dynamics with Hessian-driven damping
Conditions for linear convergence of the gradient method for non-convex optimization
Linearly-convergent FISTA variant for composite optimization with duality
A Systematic Approach to Lyapunov Analyses of Continuous-Time Models in Convex Optimization
FISTA is an automatic geometrically optimized algorithm for strongly convex functions
Thresholding gradient methods in Hilbert spaces: support identification and linear convergence
A linearly convergent doubly stochastic Gauss-Seidel algorithm for solving linear equations and a certain class of over-parameterized optimization problems
Efficiency of higher-order algorithms for minimizing composite functions
Convergence rates of the heavy-ball method under the Łojasiewicz property
A globally convergent proximal Newton-type method in nonsmooth convex optimization
Convergence of the forward-backward algorithm: beyond the worst-case with the help of geometry
A strict complementarity approach to error bound and sensitivity of solution of conic programs
First-order methods for convex optimization
Local linear convergence of proximal coordinate descent algorithm
Global complexity analysis of inexact successive quadratic approximation methods for regularized optimization under mild assumptions
General Hölder smooth convergence rates follow from specialized rates assuming growth bounds
Proximal alternating penalty algorithms for nonsmooth constrained convex optimization
Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods
Linear convergence rates for variants of the alternating direction method of multipliers in smooth cases
New nonasymptotic convergence rates of stochastic proximal point algorithm for stochastic convex optimization
The restricted strong convexity revisited: analysis of equivalence to error bound and quadratic growth
New characterizations of Hoffman constants for systems of linear constraints
Convergence rates of an inertial gradient descent algorithm under growth and flatness conditions
Error bounds for non-polyhedral convex optimization and applications to linear convergence of FDM and PGM
Exponential convergence of distributed primal-dual convex optimization algorithm without strong convexity
On the linear convergence of forward-backward splitting method. I: Convergence analysis
General convergence analysis of stochastic first-order methods for composite optimization
The condition number of a function relative to a set
New subspace minimization conjugate gradient methods based on regularization model for unconstrained optimization
Level-set subdifferential error bounds and linear convergence of Bregman proximal gradient method
New analysis of linear convergence of gradient-type methods via unifying error bound conditions
Restarting the accelerated coordinate descent method with a rough strong convexity estimate
Error Bounds, Quadratic Growth, and Linear Convergence of Proximal Methods
Provable accelerated gradient method for nonconvex low rank optimization
An inexact proximal augmented Lagrangian framework with arbitrary linearly convergent inner solver for composite convex optimization
Accelerated alternating direction method of multipliers: an optimal \(O(1 / K)\) nonergodic analysis
Convergence Rates for Deterministic and Stochastic Subgradient Methods without Lipschitz Continuity
Variational analysis perspective on linear convergence of some first order methods for nonsmooth convex optimization problems
Accelerate stochastic subgradient method by leveraging local growth condition
Accelerating Stochastic Composition Optimization
Proximal-Like Incremental Aggregated Gradient Method with Linear Convergence Under Bregman Distance Growth Conditions
Randomized Projection Methods for Convex Feasibility: Conditioning and Convergence Rates
Convergence results of a nested decentralized gradient method for non-strongly convex problems
Distributed block-diagonal approximation methods for regularized empirical risk minimization
Generalized Nesterov's accelerated proximal gradient algorithms with convergence rate of order \(o(1/k^2)\)
Inertial proximal incremental aggregated gradient method with linear convergence guarantees
From differential equation solvers to accelerated first-order methods for convex optimization
Convergence rates for the heavy-ball continuous dynamics for non-convex optimization, under Polyak-Łojasiewicz condition
Random minibatch subgradient algorithms for convex problems with functional constraints
A Kaczmarz Algorithm for Solving Tree Based Distributed Systems of Equations
Linear Convergence of Random Dual Coordinate Descent on Nonpolyhedral Convex Problems
On the rate of convergence of alternating minimization for non-smooth non-strongly convex optimization in Banach spaces

