Faster subgradient methods for functions with Hölderian growth
Publication: 2297653
DOI: 10.1007/S10107-018-01361-0
zbMATH Open: 1461.65161
OpenAlex: W3104046271
MaRDI QID: Q2297653
Authors: Patrick R. Johnstone, Pierre Moulin
Publication date: 20 February 2020
Published in: Mathematical Programming. Series A. Series B
Abstract: The purpose of this manuscript is to derive new convergence results for several subgradient methods applied to minimizing nonsmooth convex functions with Hölderian growth. The growth condition is satisfied in many applications and includes functions with quadratic growth and weakly sharp minima as special cases. To this end there are three main contributions. First, for a constant and sufficiently small stepsize, we show that the subgradient method achieves linear convergence up to a certain region including the optimal set, with error of the order of the stepsize. Second, if appropriate problem parameters are known, we derive a decaying stepsize which obtains a much faster convergence rate than is suggested by the classical result for the subgradient method. Third, we develop a novel "descending stairs" stepsize which obtains this faster convergence rate and also obtains linear convergence for the special case of weakly sharp functions. We also develop an adaptive variant of the "descending stairs" stepsize which achieves the same convergence rate without requiring an error bound constant which is difficult to estimate in practice.
Full work available at URL: https://arxiv.org/abs/1704.00196
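The abstract contrasts a constant-stepsize subgradient method (linear convergence only up to a stepsize-dependent neighbourhood of the optimum) with a "descending stairs" schedule that shrinks the stepsize in stages. The sketch below is only an illustration of that general idea, not the authors' exact schemes or constants: it runs the subgradient iteration x_{k+1} = x_k - alpha_k g_k on the sharp toy objective f(x) = ||x||_1, and the stage length, initial stepsize, and halving rule are assumptions chosen for the example.

```python
import numpy as np

def subgradient_l1(x):
    """A subgradient of f(x) = ||x||_1, a simple function with sharp growth at its minimizer 0."""
    return np.sign(x)

def constant_step(x0, alpha, iters):
    """Subgradient method with a constant stepsize: the iterates stall in a
    neighbourhood of the optimum whose size is proportional to alpha."""
    x = x0.copy()
    for _ in range(iters):
        x = x - alpha * subgradient_l1(x)
    return x

def descending_stairs(x0, alpha0, stage_len, num_stages):
    """Illustrative 'descending stairs'-style schedule (an assumption, not the
    paper's exact rule): run a fixed number of constant-stepsize iterations per
    stage, then halve the stepsize, so the attainable error shrinks stage by stage."""
    x = x0.copy()
    alpha = alpha0
    for _ in range(num_stages):
        for _ in range(stage_len):
            x = x - alpha * subgradient_l1(x)
        alpha *= 0.5
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x0 = rng.standard_normal(50)
    x_const = constant_step(x0, alpha=0.1, iters=500)
    x_stairs = descending_stairs(x0, alpha0=0.1, stage_len=50, num_stages=10)
    print("constant stepsize:  f =", np.abs(x_const).sum())   # stalls at an alpha-level error
    print("descending stairs:  f =", np.abs(x_stairs).sum())  # keeps improving as alpha decays
```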
Recommendations
- On Convergence Properties of a Subgradient Method
- Subgradient methods for sharp weakly convex functions
- A subgradient method with non-monotone line search
- General Hölder smooth convergence rates follow from specialized rates assuming growth bounds
- Convergence rates for deterministic and stochastic subgradient methods without Lipschitz continuity
Mathematics Subject Classification
- Numerical mathematical programming methods (65K05)
- Convex programming (90C25)
- Nonlinear programming (90C30)
Cites Work
- A Geometrically Convergent Subgradient Optimization Method for Nonlinearly Constrained Convex Programs
- A first-order primal-dual algorithm for convex problems with applications to imaging
- A three-operator splitting scheme and its optimization applications
- A unified approach to error bounds for structured convex optimization problems
- Activity identification and local linear convergence of forward-backward-type methods
- An \(O(n)\) algorithm for projecting a vector on the intersection of a hyperplane and a box in \(\mathbb R^n\)
- Approximation accuracy, gradient methods, and error bound for structured convex optimization
- Asymptotic analysis of high-dimensional LAD regression with Lasso smoother
- Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods
- Convergence of non-smooth descent methods using the Kurdyka-Łojasiewicz inequality
- Convergence rate of incremental subgradient algorithms
- Convex analysis and monotone operator theory in Hilbert spaces
- Coordinate descent algorithms for lasso penalized regression
- Distributed optimization and statistical learning via the alternating direction method of multipliers
- Error bounds and convergence analysis of feasible descent methods: A general approach
- Error bounds in mathematical programming
- Faster subgradient methods for functions with Hölderian growth
- Finite termination of the proximal point algorithm
- First-order algorithm with \({\mathcal{O}(\ln(1/\epsilon))}\) convergence for \({\epsilon}\)-equilibrium in two-person zero-sum games
- Global error bounds for piecewise convex polynomials
- Linearly convergent away-step conditional gradient for non-strongly convex functions
- Nonlinear programming methods in the presence of noise
- On convergence rates of subgradient optimization methods
- On proximal subgradient splitting method for minimizing the sum of two nonsmooth convex functions
- On the convergence rate for stochastic approximation in the nonsmooth setting
- Online Learning with Kernels
- RSG: Beating Subgradient Method without Smoothness and Strong Convexity
- Robust Stochastic Approximation Approach to Stochastic Programming
- The \(L_1\) penalized LAD estimator for high dimensional linear regression
- The effect of deterministic noise in subgradient methods
- The Łojasiewicz Inequality for Nonsmooth Subanalytic Functions with Applications to Subgradient Dynamical Systems
- Weak Sharp Minima in Mathematical Programming
- Weak sharp minima revisited. I: Basic theory
- "Efficient" subgradient methods for general convex optimization
Cited In (20)
- A subgradient method with constant step-size for \(\ell_1\)-composite optimization
- Exact penalties for decomposable convex optimization problems
- Subgradient methods for sharp weakly convex functions
- Stochastic algorithms with geometric step decay converge linearly on sharp functions
- A strict complementarity approach to error bound and sensitivity of solution of conic programs
- On the simplicity and conditioning of low rank semidefinite programs
- Convergence rates for deterministic and stochastic subgradient methods without Lipschitz continuity
- A superlinearly convergent subgradient method for sharp semismooth problems
- On Convergence Properties of a Subgradient Method
- Tight analyses for subgradient descent. I: Lower bounds
- General Hölder smooth convergence rates follow from specialized rates assuming growth bounds
- A simple nearly optimal restart scheme for speeding up first-order methods
- On the global convergence rate of the gradient descent method for functions with Hölder continuous gradients
- Convergence results of a new monotone inertial forward-backward splitting algorithm under the local Hölder error bound condition
- Variable sample-size optimistic mirror descent algorithm for stochastic mixed variational inequalities
- Convergence rates of subgradient methods for quasi-convex optimization problems
- Faster subgradient methods for functions with Hölderian growth
- Accelerate stochastic subgradient method by leveraging local growth condition
- An optimal-storage approach to semidefinite programming using approximate complementarity
- Radial duality. II: Applications and algorithms