New analysis of linear convergence of gradient-type methods via unifying error bound conditions
From MaRDI portal
Publication: Q2297652
Abstract: This paper reveals that a residual measure operator plays a common and central role in many error bound (EB) conditions and in a variety of gradient-type methods. On the one hand, by linking this operator with other optimality measures, we define a group of abstract EB conditions and analyze the interplay between them; on the other hand, by using this operator as an ascent direction, we propose an abstract gradient-type method and derive EB conditions that are necessary and sufficient for its linear convergence. The former provides a unified framework that not only allows us to find new connections between many existing EB conditions but also paves the way to constructing new ones. The latter allows us to identify the weakest conditions guaranteeing linear convergence for a number of fundamental algorithms, including the gradient method, the proximal point algorithm, and the forward-backward splitting algorithm. In addition, we show linear convergence of the proximal alternating linearized minimization algorithm under a group of equivalent EB conditions that are strictly weaker than the traditional strong convexity condition. Moreover, by defining a new EB condition, we show Q-linear convergence of Nesterov's accelerated forward-backward algorithm without strong convexity. Finally, we verify EB conditions for a class of dual objective functions.
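To illustrate the setting of the abstract (not taken from the paper itself), the sketch below runs forward-backward splitting on a LASSO-type problem and tracks the fixed-point residual \(\|x_{k+1} - x_k\|\), one natural instance of a residual measure. All names, parameters, and the test problem are illustrative assumptions; under strong convexity of the smooth part, this residual typically decays at a linear (geometric) rate, which is the kind of behavior the paper's EB conditions characterize.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward(A, b, lam, step, iters=500):
    """Forward-backward splitting for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    residuals = []
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                             # forward (gradient) step
        x_new = soft_threshold(x - step * grad, step * lam)  # backward (proximal) step
        residuals.append(np.linalg.norm(x_new - x))          # fixed-point residual measure
        x = x_new
    return x, residuals

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))   # full column rank, so the smooth part is strongly convex
b = rng.standard_normal(40)
L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
x, res = forward_backward(A, b, lam=0.1, step=1.0 / L)
```

On this well-conditioned random instance the residual sequence `res` shrinks geometrically; under the weaker EB conditions studied in the paper, the same linear rate can hold even without strong convexity.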
Recommendations
- Error bounds, quadratic growth, and linear convergence of proximal methods
- Conditions for linear convergence of the gradient method for non-convex optimization
- Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization
- On linear convergence of gradient-type minimization algorithms
- Level-set subdifferential error bounds and linear convergence of Bregman proximal gradient method
Cites work
- scientific article; zbMATH DE number 6378119
- scientific article; zbMATH DE number 3313108
- scientific article; zbMATH DE number 3371284
- scientific article; zbMATH DE number 3398324
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A differential equation for modeling Nesterov's accelerated gradient method: theory and insights
- A dual algorithm for a class of augmented convex signal recovery models
- A proximal-gradient homotopy method for the sparse least-squares problem
- A unified approach to error bounds for structured convex optimization problems
- An introduction to continuous optimization for imaging
- Asymptotic Convergence Analysis of the Proximal Point Algorithm
- Asymptotic convergence of nonlinear contraction semigroups in Hilbert space
- Asynchronous stochastic coordinate descent: parallelism and convergence properties
- Augmented \(\ell_1\) and nuclear-norm models with a globally linearly convergent algorithm
- Calculus of the exponent of Kurdyka-Łojasiewicz inequality and its applications to linear convergence of first-order methods
- Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods
- Convergence of the forward-backward algorithm: beyond the worst-case with the help of geometry
- Convergence rates of inertial forward-backward algorithms
- Convex Analysis
- Convex analysis and monotone operator theory in Hilbert spaces
- Error bounds and convergence analysis of feasible descent methods: A general approach
- Error bounds, quadratic growth, and linear convergence of proximal methods
- Exact worst-case convergence rates of the proximal gradient method for composite convex minimization
- Fast and safe: accelerated gradient methods with optimality certificates and underestimate sequences
- Fixed-Point Continuation for \(\ell_1\)-Minimization: Methodology and Convergence
- From error bounds to the complexity of first-order descent methods for convex functions
- Implicit Functions and Solution Mappings
- Introductory lectures on convex optimization. A basic course.
- Iteration complexity analysis of block coordinate descent methods
- Linear convergence of descent methods for the unconstrained minimization of restricted strongly convex functions
- Linear convergence of first order methods for non-strongly convex optimization
- Metric subregularity of the convex subdifferential in Banach spaces
- Nonlinear optimization.
- On equiwellset minimum problems
- On faster convergence of cyclic block coordinate descent-type methods for strongly convex minimization
- On the Convergence of a Regularized Jacobi Algorithm for Convex Optimization
- On the Linear Convergence of Descent Methods for Convex Essentially Smooth Minimization
- On the convergence of the proximal algorithm for nonsmooth functions involving analytic features
- On the estimation performance and convergence rate of the generalized power method for phase synchronization
- On the rate of convergence of the proximal alternating linearized minimization algorithm for convex problems
- Optimized first-order methods for smooth convex minimization
- Performance of first-order methods for smooth convex minimization: a novel approach
- Phase retrieval via Wirtinger flow: theory and algorithms
- Projected shrinkage algorithm for box-constrained \(\ell_1\)-minimization
- Proximal alternating linearized minimization for nonconvex and nonsmooth problems
- Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization
- Second-order growth, tilt stability, and metric regularity of the subdifferential
- Smooth strongly convex interpolation and exact worst-case performance of first-order methods
- Some continuity properties of polyhedral multifunctions
- The rate of convergence of Nesterov's accelerated forward-backward method is actually faster than \(1/k^2\)
- The restricted strong convexity revisited: analysis of equivalence to error bound and quadratic growth
- Variational Analysis
Cited in (11 publications)
- Error bounds, quadratic growth, and linear convergence of proximal methods
- Conditions for linear convergence of the gradient method for non-convex optimization
- Randomized smoothing variance reduction method for large-scale non-smooth convex optimization
- Proximal-like incremental aggregated gradient method with linear convergence under Bregman distance growth conditions
- Global complexity analysis of inexact successive quadratic approximation methods for regularized optimization under mild assumptions
- Inertial proximal incremental aggregated gradient method with linear convergence guarantees
- Revisiting linearized Bregman iterations under Lipschitz-like convexity condition
- Variational analysis perspective on linear convergence of some first order methods for nonsmooth convex optimization problems
- Convergence rates for the heavy-ball continuous dynamics for non-convex optimization, under Polyak-Łojasiewicz condition
- Scaled relative graphs: nonexpansive operators via 2D Euclidean geometry
- Level-set subdifferential error bounds and linear convergence of Bregman proximal gradient method