Pages that link to "Item:Q1675251"
The following 50 pages link to "From error bounds to the complexity of first-order descent methods for convex functions" (Q1675251):
- The restricted strong convexity revisited: analysis of equivalence to error bound and quadratic growth (Q523179)
- Distributed block-diagonal approximation methods for regularized empirical risk minimization (Q782443)
- Convergence rates of subgradient methods for quasi-convex optimization problems (Q782917)
- Primal necessary characterizations of transversality properties (Q830190)
- Local convergence of the heavy-ball method and iPiano for non-convex optimization (Q1637355)
- Implicit error bounds for Picard iterations on Hilbert spaces (Q1639952)
- A family of functional inequalities: Łojasiewicz inequalities and displacement convex functions (Q1653290)
- Approaching nonsmooth nonconvex optimization problems through first order dynamical systems with hidden acceleration and Hessian driven damping terms (Q1653320)
- A simple globally convergent algorithm for the nonsmooth nonconvex single source localization problem (Q1685585)
- Iterative regularization via dual diagonal descent (Q1703168)
- Dual block-coordinate forward-backward algorithm with application to deconvolution and deinterlacing of video sequences (Q1704003)
- Extragradient method in optimization: convergence and complexity (Q1706412)
- Convergence of inertial dynamics and proximal algorithms governed by maximally monotone operators (Q1739043)
- On the proximal gradient algorithm with alternated inertia (Q1752647)
- Calculus of the exponent of Kurdyka-Łojasiewicz inequality and its applications to linear convergence of first-order methods (Q1785009)
- Convergence rates of an inertial gradient descent algorithm under growth and flatness conditions (Q2020604)
- Linear convergence of inexact descent method and inexact proximal gradient algorithms for lower-order regularization problems (Q2022292)
- On the interplay between acceleration and identification for the proximal gradient algorithm (Q2023654)
- Constraint qualifications for Karush-Kuhn-Tucker conditions in multiobjective optimization (Q2025290)
- Kurdyka-Łojasiewicz property of zero-norm composite functions (Q2026719)
- Nearly optimal first-order methods for convex optimization under gradient norm measure: an adaptive regularization approach (Q2031939)
- On the linear convergence of forward-backward splitting method. I: Convergence analysis (Q2031953)
- Transversality properties: primal sufficient conditions (Q2045184)
- Level-set subdifferential error bounds and linear convergence of Bregman proximal gradient method (Q2046546)
- A sufficient condition for asymptotically well behaved property of convex polynomials (Q2060600)
- Alternating projections with applications to Gerchberg-Saxton error reduction (Q2070399)
- Variational analysis perspective on linear convergence of some first order methods for nonsmooth convex optimization problems (Q2070400)
- Inertial proximal incremental aggregated gradient method with linear convergence guarantees (Q2084299)
- Curiosities and counterexamples in smooth convex optimization (Q2089782)
- Convergence rates for the heavy-ball continuous dynamics for non-convex optimization, under Polyak-Łojasiewicz condition (Q2089864)
- Avoiding bad steps in Frank-Wolfe variants (Q2111475)
- Perturbation techniques for convergence analysis of proximal gradient method and other first-order algorithms via variational analysis (Q2116020)
- Restarting Frank-Wolfe: faster rates under Hölderian error bounds (Q2116603)
- Efficient iterative method for SOAV minimization problem with linear equality and box constraints and its linear convergence (Q2125304)
- Convergence results of a new monotone inertial forward-backward splitting algorithm under the local Hölder error bound condition (Q2128612)
- Stochastic quasi-subgradient method for stochastic quasi-convex feasibility problems (Q2129140)
- Interior quasi-subgradient method with non-Euclidean distances for constrained quasi-convex optimization problems in Hilbert spaces (Q2141725)
- Scaled relative graphs: nonexpansive operators via 2D Euclidean geometry (Q2149561)
- Kurdyka-Łojasiewicz exponent via inf-projection (Q2162122)
- Neural network for nonsmooth pseudoconvex optimization with general convex constraints (Q2179805)
- Nonsmooth optimization using Taylor-like models: error bounds, convergence, and termination criteria (Q2220664)
- Convergence rates for an inertial algorithm of gradient type associated to a smooth non-convex minimization (Q2235149)
- New analysis of linear convergence of gradient-type methods via unifying error bound conditions (Q2297652)
- Restarting the accelerated coordinate descent method with a rough strong convexity estimate (Q2301128)
- The modified second APG method for DC optimization problems (Q2311112)
- Moduli of regularity and rates of convergence for Fejér monotone sequences (Q2317680)
- Convergence rates of forward-Douglas-Rachford splitting method (Q2317846)
- On linear convergence of non-Euclidean gradient methods without strong convexity and Lipschitz gradient continuity (Q2322369)
- Quadratic optimization with orthogonality constraint: explicit Łojasiewicz exponent and linear convergence of retraction-based line-search and stochastic variance-reduced gradient methods (Q2330648)
- Error bounds for parametric polynomial systems with applications to higher-order stability analysis and convergence rates (Q2413090)