Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization

From MaRDI portal
Publication:2355319

DOI: 10.1007/s11590-014-0795-x
zbMath: 1350.90029
OpenAlex: W2035507034
MaRDI QID: Q2355319

Li-Zhi Cheng, Hui Zhang

Publication date: 22 July 2015

Published in: Optimization Letters

Full work available at URL: https://doi.org/10.1007/s11590-014-0795-x




Related Items (17)

- Convergence results of a new monotone inertial forward-backward splitting algorithm under the local Hölder error bound condition
- Exponential convergence of distributed optimization for heterogeneous linear multi-agent systems over unbalanced digraphs
- Distributed smooth optimisation with event-triggered proportional-integral algorithms
- Zeroth-order algorithms for stochastic distributed nonconvex optimization
- A one-bit, comparison-based gradient estimator
- Exact worst-case convergence rates of the proximal gradient method for composite convex minimization
- Linear convergence of first order methods for non-strongly convex optimization
- Newton-MR: inexact Newton method with minimum residual sub-problem solver
- A Generalization of Wirtinger Flow for Exact Interferometric Inversion
- Linear convergence of the randomized sparse Kaczmarz method
- A linearly convergent stochastic recursive gradient method for convex optimization
- The restricted strong convexity revisited: analysis of equivalence to error bound and quadratic growth
- RSG: Beating Subgradient Method without Smoothness and Strong Convexity
- New analysis of linear convergence of gradient-type methods via unifying error bound conditions
- Linear Convergence of Descent Methods for the Unconstrained Minimization of Restricted Strongly Convex Functions
- On the linear convergence of the stochastic gradient method with constant step-size
- On the rate of convergence of alternating minimization for non-smooth non-strongly convex optimization in Banach spaces


