Pages that link to "Item:Q2414900"
The following pages link to "Linear convergence of first order methods for non-strongly convex optimization" (Q2414900):
Displaying 50 items.
- The restricted strong convexity revisited: analysis of equivalence to error bound and quadratic growth (Q523179)
- Distributed block-diagonal approximation methods for regularized empirical risk minimization (Q782443)
- Exact worst-case convergence rates of the proximal gradient method for composite convex minimization (Q1670100)
- Proximal alternating penalty algorithms for nonsmooth constrained convex optimization (Q1734766)
- Linear convergence rates for variants of the alternating direction method of multipliers in smooth cases (Q1743535)
- New characterizations of Hoffman constants for systems of linear constraints (Q2020601)
- Convergence rates of an inertial gradient descent algorithm under growth and flatness conditions (Q2020604)
- On the linear convergence of forward-backward splitting method. I: Convergence analysis (Q2031953)
- General convergence analysis of stochastic first-order methods for composite optimization (Q2032020)
- The condition number of a function relative to a set (Q2039239)
- New subspace minimization conjugate gradient methods based on regularization model for unconstrained optimization (Q2041515)
- Level-set subdifferential error bounds and linear convergence of Bregman proximal gradient method (Q2046546)
- An inexact proximal augmented Lagrangian framework with arbitrary linearly convergent inner solver for composite convex optimization (Q2062324)
- Variational analysis perspective on linear convergence of some first order methods for nonsmooth convex optimization problems (Q2070400)
- Convergence results of a nested decentralized gradient method for non-strongly convex problems (Q2082236)
- Generalized Nesterov's accelerated proximal gradient algorithms with convergence rate of order \(o(1/k^2)\) (Q2082553)
- Inertial proximal incremental aggregated gradient method with linear convergence guarantees (Q2084299)
- From differential equation solvers to accelerated first-order methods for convex optimization (Q2089788)
- Convergence rates for the heavy-ball continuous dynamics for non-convex optimization, under Polyak-Łojasiewicz condition (Q2089864)
- On the rate of convergence of alternating minimization for non-smooth non-strongly convex optimization in Banach spaces (Q2115326)
- Efficient iterative method for SOAV minimization problem with linear equality and box constraints and its linear convergence (Q2125304)
- Convergence results of a new monotone inertial forward-backward splitting algorithm under the local Hölder error bound condition (Q2128612)
- Parallel random block-coordinate forward-backward algorithm: a unified convergence analysis (Q2133415)
- Exponential convergence of distributed optimization for heterogeneous linear multi-agent systems over unbalanced digraphs (Q2139374)
- Scaled relative graphs: nonexpansive operators via 2D Euclidean geometry (Q2149561)
- Synthesis of accelerated gradient algorithms for optimization and saddle point problems using Lyapunov functions and LMIs (Q2154842)
- Global complexity analysis of inexact successive quadratic approximation methods for regularized optimization under mild assumptions (Q2200083)
- Error bounds for non-polyhedral convex optimization and applications to linear convergence of FDM and PGM (Q2279378)
- Exponential convergence of distributed primal-dual convex optimization algorithm without strong convexity (Q2280701)
- New analysis of linear convergence of gradient-type methods via unifying error bound conditions (Q2297652)
- Restarting the accelerated coordinate descent method with a rough strong convexity estimate (Q2301128)
- Provable accelerated gradient method for nonconvex low rank optimization (Q2303662)
- Accelerated alternating direction method of multipliers: an optimal \(O(1/K)\) nonergodic analysis (Q2311982)
- Random minibatch subgradient algorithms for convex problems with functional constraints (Q2338088)
- Inexact successive quadratic approximation for regularized optimization (Q2419525)
- A linearly convergent doubly stochastic Gauss-Seidel algorithm for solving linear equations and a certain class of over-parameterized optimization problems (Q2425182)
- Computation of the maximal invariant set of discrete-time linear systems subject to a class of non-convex constraints (Q2663960)
- New results on multi-dimensional linear discriminant analysis (Q2670460)
- Quadratic growth conditions and uniqueness of optimal solution to Lasso (Q2671439)
- Convergence rates of the heavy-ball method under the Łojasiewicz property (Q2687044)
- A globally convergent proximal Newton-type method in nonsmooth convex optimization (Q2687066)
- Convergence of the forward-backward algorithm: beyond the worst-case with the help of geometry (Q2687067)
- General Hölder smooth convergence rates follow from specialized rates assuming growth bounds (Q2696991)
- Accelerating Stochastic Composition Optimization (Q4637024)
- Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods (Q4641660)
- On the Complexity Analysis of the Primal Solutions for the Accelerated Randomized Dual Coordinate Ascent (Q4969070)
- (Q4969246)
- Proximal-Like Incremental Aggregated Gradient Method with Linear Convergence Under Bregman Distance Growth Conditions (Q4991666)
- (Q4998961)
- A Kaczmarz Algorithm for Solving Tree Based Distributed Systems of Equations (Q5020146)