The restricted strong convexity revisited: analysis of equivalence to error bound and quadratic growth
Publication: 523179
DOI: 10.1007/s11590-016-1058-9
zbMath: 1378.90067
arXiv: 1511.01635
OpenAlex: W2229440883
MaRDI QID: Q523179
Publication date: 20 April 2017
Published in: Optimization Letters
Full work available at URL: https://arxiv.org/abs/1511.01635
Keywords: linear convergence, global error bound, gradient mapping, quadratic growth property, restricted strong convexity
Related Items (11)
- Convergence results of a new monotone inertial forward-backward splitting algorithm under the local Hölder error bound condition
- Zeroth-Order Regularized Optimization (ZORO): Approximately Sparse Gradients and Adaptive Sampling
- A mini-batch proximal stochastic recursive gradient algorithm with diagonal Barzilai-Borwein stepsize
- Newton-MR: inexact Newton method with minimum residual sub-problem solver
- On the convergence rate of Fletcher-Reeves nonlinear conjugate gradient methods satisfying strong Wolfe conditions: application to parameter identification in problems governed by general dynamics
- Variance reduction for root-finding problems
- Distributed Nash equilibrium seeking under partial-decision information via the alternating direction method of multipliers
- New analysis of linear convergence of gradient-type methods via unifying error bound conditions
- Some characterizations of error bound for non-lower semicontinuous functions
- Accelerate stochastic subgradient method by leveraging local growth condition
- Proximal-like incremental aggregated gradient method with linear convergence under Bregman distance growth conditions
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Introductory lectures on convex optimization. A basic course.
- From error bounds to the complexity of first-order descent methods for convex functions
- Bounds for error in the solution set of a perturbed linear program
- Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization
- Linearly convergent away-step conditional gradient for non-strongly convex functions
- Linear convergence of first order methods for non-strongly convex optimization
- Augmented $\ell_1$ and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm
- Asynchronous Stochastic Coordinate Descent: Parallelism and Convergence Properties
- On the Linear Convergence of Descent Methods for Convex Essentially Smooth Minimization
- Lipschitz Continuity of Solutions of Linear Inequalities, Programs and Complementarity Problems
- Error Bounds, Quadratic Growth, and Linear Convergence of Proximal Methods