Pages that link to "Item:Q2297652"
From MaRDI portal
The following pages link to New analysis of linear convergence of gradient-type methods via unifying error bound conditions (Q2297652):
Displaying 9 items.
- Randomized smoothing variance reduction method for large-scale non-smooth convex optimization (Q2033403)
- Level-set subdifferential error bounds and linear convergence of Bregman proximal gradient method (Q2046546)
- Variational analysis perspective on linear convergence of some first order methods for nonsmooth convex optimization problems (Q2070400)
- Inertial proximal incremental aggregated gradient method with linear convergence guarantees (Q2084299)
- Convergence rates for the heavy-ball continuous dynamics for non-convex optimization, under Polyak-Łojasiewicz condition (Q2089864)
- Scaled relative graphs: nonexpansive operators via 2D Euclidean geometry (Q2149561)
- Global complexity analysis of inexact successive quadratic approximation methods for regularized optimization under mild assumptions (Q2200083)
- Proximal-like incremental aggregated gradient method with linear convergence under Bregman distance growth conditions (Q4991666)
- Revisiting linearized Bregman iterations under Lipschitz-like convexity condition (Q5058656)