Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization
MaRDI QID: Q2355319
DOI: 10.1007/s11590-014-0795-x
zbMath: 1350.90029
OpenAlex: W2035507034
Publication date: 22 July 2015
Published in: Optimization Letters
Full work available at URL: https://doi.org/10.1007/s11590-014-0795-x
Mathematics Subject Classification: Convex programming (90C25); Large-scale problems in mathematical programming (90C06); Methods of reduced gradient type (90C52)
Related Items (17)
- Convergence results of a new monotone inertial forward-backward splitting algorithm under the local Hölder error bound condition
- Exponential convergence of distributed optimization for heterogeneous linear multi-agent systems over unbalanced digraphs
- Distributed smooth optimisation with event-triggered proportional-integral algorithms
- Zeroth-order algorithms for stochastic distributed nonconvex optimization
- A one-bit, comparison-based gradient estimator
- Exact worst-case convergence rates of the proximal gradient method for composite convex minimization
- Linear convergence of first order methods for non-strongly convex optimization
- Newton-MR: inexact Newton method with minimum residual sub-problem solver
- A Generalization of Wirtinger Flow for Exact Interferometric Inversion
- Linear convergence of the randomized sparse Kaczmarz method
- A linearly convergent stochastic recursive gradient method for convex optimization
- The restricted strong convexity revisited: analysis of equivalence to error bound and quadratic growth
- RSG: Beating Subgradient Method without Smoothness and Strong Convexity
- New analysis of linear convergence of gradient-type methods via unifying error bound conditions
- Linear Convergence of Descent Methods for the Unconstrained Minimization of Restricted Strongly Convex Functions
- On the linear convergence of the stochastic gradient method with constant step-size
- On the rate of convergence of alternating minimization for non-smooth non-strongly convex optimization in Banach spaces
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Sample size selection in optimization methods for machine learning
- Descent methods for convex essentially smooth minimization
- Introductory lectures on convex optimization. A basic course.
- Augmented $\ell_1$ and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm
- Hybrid Deterministic-Stochastic Methods for Data Fitting
- Non-asymptotic convergence analysis of inexact gradient methods for machine learning without strong convexity
- Remarks on Convergence of the Matrix Splitting Algorithm for the Symmetric Linear Complementarity Problem
- A Convergent Incremental Gradient Method with a Constant Step Size
- Bregman Iterative Algorithms for $\ell_1$-Minimization with Applications to Compressed Sensing
- Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm