Faster subgradient methods for functions with Hölderian growth
From MaRDI portal
Publication:2297653
Abstract: The purpose of this manuscript is to derive new convergence results for several subgradient methods applied to minimizing nonsmooth convex functions with Hölderian growth. The growth condition is satisfied in many applications and includes functions with quadratic growth and weakly sharp minima as special cases. To this end, there are three main contributions. First, for a constant and sufficiently small stepsize, we show that the subgradient method achieves linear convergence up to a certain region containing the optimal set, with error on the order of the stepsize. Second, if appropriate problem parameters are known, we derive a decaying stepsize that obtains a much faster convergence rate than is suggested by the classical result for the subgradient method. Third, we develop a novel "descending stairs" stepsize that obtains this faster convergence rate and also achieves linear convergence for the special case of weakly sharp functions. We also develop an adaptive variant of the "descending stairs" stepsize which achieves the same convergence rate without requiring an error-bound constant that is difficult to estimate in practice.
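The abstract contrasts a constant stepsize, which converges only to within an O(stepsize) region of the optimal set, with a geometrically decaying "descending stairs" schedule that restores fast convergence on sharp functions. A minimal illustrative sketch of that contrast on the weakly sharp one-dimensional objective f(x) = |x − 1|; all function names, constants, and the halving schedule here are hypothetical choices for illustration, not the paper's actual parameters:

```python
# Illustrative sketch (not the paper's method): subgradient descent on the
# weakly sharp function f(x) = |x - 1|, comparing a constant stepsize with a
# "descending stairs" schedule that halves the stepsize every 50 iterations.

def subgradient_descent(subgrad, x0, stepsize, n_iters):
    """Run the basic subgradient iteration x_{k+1} = x_k - t_k * g_k."""
    x = x0
    for k in range(n_iters):
        x = x - stepsize(k) * subgrad(x)
    return x

def subgrad_abs(x):
    # A subgradient of f(x) = |x - 1|.
    if x > 1.0:
        return 1.0
    if x < 1.0:
        return -1.0
    return 0.0

# Constant stepsize: the iterates eventually oscillate in a band of width
# on the order of the stepsize around the minimizer x* = 1.
x_const = subgradient_descent(subgrad_abs, x0=5.03,
                              stepsize=lambda k: 0.1, n_iters=200)

# "Descending stairs": halving the stepsize on a fixed schedule shrinks
# that band geometrically, giving linear convergence on sharp functions.
x_stairs = subgradient_descent(subgrad_abs, x0=5.03,
                               stepsize=lambda k: 0.1 * 0.5 ** (k // 50),
                               n_iters=200)

print(abs(x_const - 1.0), abs(x_stairs - 1.0))
```

With these illustrative constants, the constant-stepsize iterate stalls at an error bounded by the stepsize, while the stairs schedule drives the error down with each stepsize halving.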
Recommendations
- On Convergence Properties of a Subgradient Method
- Subgradient methods for sharp weakly convex functions
- A subgradient method with non-monotone line search
- General Hölder smooth convergence rates follow from specialized rates assuming growth bounds
- Convergence rates for deterministic and stochastic subgradient methods without Lipschitz continuity
Cites work
- scientific article; zbMATH DE number 4015993 (title unavailable)
- scientific article; zbMATH DE number 1818892 (title unavailable)
- scientific article; zbMATH DE number 2155014 (title unavailable)
- A Geometrically Convergent Subgradient Optimization Method for Nonlinearly Constrained Convex Programs
- A first-order primal-dual algorithm for convex problems with applications to imaging
- A three-operator splitting scheme and its optimization applications
- A unified approach to error bounds for structured convex optimization problems
- Activity identification and local linear convergence of forward-backward-type methods
- An \(O(n)\) algorithm for projecting a vector on the intersection of a hyperplane and a box in \(\mathbb R^n\)
- Approximation accuracy, gradient methods, and error bound for structured convex optimization
- Asymptotic analysis of high-dimensional LAD regression with Lasso smoother
- Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods
- Convergence of non-smooth descent methods using the Kurdyka-Łojasiewicz inequality
- Convergence rate of incremental subgradient algorithms
- Convex analysis and monotone operator theory in Hilbert spaces
- Coordinate descent algorithms for lasso penalized regression
- Distributed optimization and statistical learning via the alternating direction method of multipliers
- Error bounds and convergence analysis of feasible descent methods: A general approach
- Error bounds in mathematical programming
- Faster subgradient methods for functions with Hölderian growth
- Finite termination of the proximal point algorithm
- First-order algorithm with \({\mathcal{O}(\ln(1/\epsilon))}\) convergence for \({\epsilon}\)-equilibrium in two-person zero-sum games
- Global error bounds for piecewise convex polynomials
- Linearly convergent away-step conditional gradient for non-strongly convex functions
- Nonlinear programming methods in the presence of noise
- On convergence rates of subgradient optimization methods
- On proximal subgradient splitting method for minimizing the sum of two nonsmooth convex functions
- On the convergence rate for stochastic approximation in the nonsmooth setting
- Online Learning with Kernels
- RSG: Beating Subgradient Method without Smoothness and Strong Convexity
- Robust Stochastic Approximation Approach to Stochastic Programming
- The \(L_1\) penalized LAD estimator for high dimensional linear regression
- The effect of deterministic noise in subgradient methods
- The Łojasiewicz Inequality for Nonsmooth Subanalytic Functions with Applications to Subgradient Dynamical Systems
- Weak Sharp Minima in Mathematical Programming
- Weak sharp minima revisited. I: Basic theory
- “Efficient” subgradient methods for general convex optimization
Cited in (20)
- On the simplicity and conditioning of low rank semidefinite programs
- On the global convergence rate of the gradient descent method for functions with Hölder continuous gradients
- Stochastic algorithms with geometric step decay converge linearly on sharp functions
- General Hölder smooth convergence rates follow from specialized rates assuming growth bounds
- Subgradient methods for sharp weakly convex functions
- A simple nearly optimal restart scheme for speeding up first-order methods
- Variable sample-size optimistic mirror descent algorithm for stochastic mixed variational inequalities
- Convergence rates of subgradient methods for quasi-convex optimization problems
- Faster subgradient methods for functions with Hölderian growth
- Convergence rates for deterministic and stochastic subgradient methods without Lipschitz continuity
- An optimal-storage approach to semidefinite programming using approximate complementarity
- Exact penalties for decomposable convex optimization problems
- A superlinearly convergent subgradient method for sharp semismooth problems
- Convergence results of a new monotone inertial forward-backward splitting algorithm under the local Hölder error bound condition
- A subgradient method with constant step-size for \(\ell_1\)-composite optimization
- Radial duality. II: Applications and algorithms
- Tight analyses for subgradient descent. I: Lower bounds
- A strict complementarity approach to error bound and sensitivity of solution of conic programs
- On Convergence Properties of a Subgradient Method
- Accelerate stochastic subgradient method by leveraging local growth condition