Linear convergence of first order methods for non-strongly convex optimization

From MaRDI portal

Publication:2414900

DOI: 10.1007/S10107-018-1232-1
zbMath: 1412.90111
arXiv: 1504.06298
OpenAlex: W2963830980
Wikidata: Q105946458
Scholia: Q105946458
MaRDI QID: Q2414900

François Glineur, Yu. E. Nesterov, Ion Necoara

Publication date: 17 May 2019

Published in: Mathematical Programming. Series A. Series B

Full work available at URL: https://arxiv.org/abs/1504.06298

Related Items (89)

Efficient iterative method for SOAV minimization problem with linear equality and box constraints and its linear convergence
Convergence results of a new monotone inertial forward-backward splitting algorithm under the local Hölder error bound condition
Parallel random block-coordinate forward-backward algorithm: a unified convergence analysis
Exponential convergence of distributed optimization for heterogeneous linear multi-agent systems over unbalanced digraphs
On the Complexity Analysis of the Primal Solutions for the Accelerated Randomized Dual Coordinate Ascent
Stochastic block projection algorithms with extrapolation for convex feasibility problems
Exact gradient methods with memory
Scaled relative graphs: nonexpansive operators via 2D Euclidean geometry
Synthesis of accelerated gradient algorithms for optimization and saddle point problems using Lyapunov functions and LMIs
Unnamed Item
Active-Set Identification with Complexity Guarantees of an Almost Cyclic 2-Coordinate Descent Method with Armijo Line Search
Exact worst-case convergence rates of the proximal gradient method for composite convex minimization
Computation of the maximal invariant set of discrete-time linear systems subject to a class of non-convex constraints
An Iterative Reduction FISTA Algorithm for Large-Scale LASSO
New results on multi-dimensional linear discriminant analysis
Quadratic growth conditions and uniqueness of optimal solution to Lasso
Convergence Rates of the Heavy Ball Method for Quasi-strongly Convex Optimization
Linear convergence of Frank-Wolfe for rank-one matrix recovery without strong convexity
Inexact successive quadratic approximation for regularized optimization
An update-and-stabilize framework for the minimum-norm-point problem
A speed restart scheme for a dynamics with Hessian-driven damping
Conditions for linear convergence of the gradient method for non-convex optimization
Linearly-convergent FISTA variant for composite optimization with duality
A Systematic Approach to Lyapunov Analyses of Continuous-Time Models in Convex Optimization
FISTA is an automatic geometrically optimized algorithm for strongly convex functions
Unnamed Item
Thresholding gradient methods in Hilbert spaces: support identification and linear convergence
A linearly convergent doubly stochastic Gauss-Seidel algorithm for solving linear equations and a certain class of over-parameterized optimization problems
Efficiency of higher-order algorithms for minimizing composite functions
Convergence rates of the heavy-ball method under the Łojasiewicz property
A globally convergent proximal Newton-type method in nonsmooth convex optimization
Convergence of the forward-backward algorithm: beyond the worst-case with the help of geometry
A strict complementarity approach to error bound and sensitivity of solution of conic programs
First-order methods for convex optimization
Local linear convergence of proximal coordinate descent algorithm
Global complexity analysis of inexact successive quadratic approximation methods for regularized optimization under mild assumptions
General Hölder smooth convergence rates follow from specialized rates assuming growth bounds
Unnamed Item
Proximal alternating penalty algorithms for nonsmooth constrained convex optimization
Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods
The exact worst-case convergence rate of the alternating direction method of multipliers
Linear convergence rates for variants of the alternating direction method of multipliers in smooth cases
New nonasymptotic convergence rates of stochastic proximal point algorithm for stochastic convex optimization
The restricted strong convexity revisited: analysis of equivalence to error bound and quadratic growth
New characterizations of Hoffman constants for systems of linear constraints
Convergence rates of an inertial gradient descent algorithm under growth and flatness conditions
Coordinate descent methods beyond smoothness and separability
Error bounds for non-polyhedral convex optimization and applications to linear convergence of FDM and PGM
Exponential convergence of distributed primal-dual convex optimization algorithm without strong convexity
Mini-batch stochastic subgradient for functional constrained optimization
On the linear convergence of forward-backward splitting method. I: Convergence analysis
General convergence analysis of stochastic first-order methods for composite optimization
Acceleration and restart for the randomized Bregman-Kaczmarz method
Variance reduced moving balls approximation method for smooth constrained minimization problems
The condition number of a function relative to a set
Subgradient regularized multivariate convex regression at scale
MGProx: a nonsmooth multigrid proximal gradient method with adaptive restriction for strongly convex optimization
New subspace minimization conjugate gradient methods based on regularization model for unconstrained optimization
Consensus-based optimization methods converge globally
Interpolation conditions for linear operators and applications to performance estimation problems
Distributed accelerated gradient methods with restart under quadratic growth condition
Parameter-free FISTA by adaptive restart and backtracking
Lipschitz continuity of the metric projection operator and convergence of gradient methods
Convergence rate analysis of the gradient descent–ascent method for convex–concave saddle-point problems
Level-set subdifferential error bounds and linear convergence of Bregman proximal gradient method
Perseus: a simple and optimal high-order method for variational inequalities
New analysis of linear convergence of gradient-type methods via unifying error bound conditions
Restarting the accelerated coordinate descent method with a rough strong convexity estimate
Error Bounds, Quadratic Growth, and Linear Convergence of Proximal Methods
Provable accelerated gradient method for nonconvex low rank optimization
An inexact proximal augmented Lagrangian framework with arbitrary linearly convergent inner solver for composite convex optimization
Accelerated alternating direction method of multipliers: an optimal \(O(1 / K)\) nonergodic analysis
Convergence Rates for Deterministic and Stochastic Subgradient Methods without Lipschitz Continuity
Variational analysis perspective on linear convergence of some first order methods for nonsmooth convex optimization problems
Accelerate stochastic subgradient method by leveraging local growth condition
Accelerating Stochastic Composition Optimization
Proximal-Like Incremental Aggregated Gradient Method with Linear Convergence Under Bregman Distance Growth Conditions
Randomized Projection Methods for Convex Feasibility: Conditioning and Convergence Rates
Convergence results of a nested decentralized gradient method for non-strongly convex problems
Distributed block-diagonal approximation methods for regularized empirical risk minimization
Generalized Nesterov's accelerated proximal gradient algorithms with convergence rate of order \(o(1/k^2)\)
Unnamed Item
Inertial proximal incremental aggregated gradient method with linear convergence guarantees
From differential equation solvers to accelerated first-order methods for convex optimization
Convergence rates for the heavy-ball continuous dynamics for non-convex optimization, under Polyak-Łojasiewicz condition
Random minibatch subgradient algorithms for convex problems with functional constraints
A Kaczmarz Algorithm for Solving Tree Based Distributed Systems of Equations
Linear Convergence of Random Dual Coordinate Descent on Nonpolyhedral Convex Problems
On the rate of convergence of alternating minimization for non-smooth non-strongly convex optimization in Banach spaces


Uses Software



Cites Work




This page was built for publication: Linear convergence of first order methods for non-strongly convex optimization