Linear convergence of epsilon-subgradient descent methods for a class of convex functions
DOI: 10.1007/S101070050078 · zbMATH Open: 1029.90056 · OpenAlex: W1979459040 · MaRDI QID: Q1806023 · FDO: Q1806023
Author: Stephen M. Robinson
Publication date: 1 February 2004
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: http://pure.iiasa.ac.at/id/eprint/4985/1/WP-96-041.pdf
Recommendations
- Convergence properties of a conditional \(\varepsilon\)-subgradient method applied to linear programs
- Convergence of some algorithms for convex minimization
- A general approach to convergence properties of some methods for nonsmooth convex optimization
- Convergence rates for deterministic and stochastic subgradient methods without Lipschitz continuity
Keywords: convex functions; proximal point method; resolvent method; bundle method; linear convergence rate; bundle-trust region method; epsilon-subgradient descent methods
MSC classifications: Convex programming (90C25); Nonlinear programming (90C30); Numerical methods based on nonlinear programming (49M37); Nonsmooth analysis (49J52)
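The keywords above describe epsilon-subgradient descent for nonsmooth convex functions. As an illustrative sketch only (not the algorithm analyzed in the paper), the method can be demonstrated on a piecewise-linear convex function, where any affine piece that comes within \(\varepsilon\) of the maximum supplies a valid \(\varepsilon\)-subgradient; the data `A`, `b` and the diminishing step size \(1/k\) below are assumptions chosen for the example:

```python
import numpy as np

# Sketch of epsilon-subgradient descent on the piecewise-linear convex
# function f(x) = max_i (a_i . x + b_i). Any row a_i whose affine piece
# comes within eps of the max is an element of the eps-subdifferential
# at x; eps = 0 recovers an exact subgradient. (Illustrative only.)

def f(x, A, b):
    return np.max(A @ x + b)

def eps_subgradient(x, A, b, eps=0.0):
    vals = A @ x + b
    # first row whose value is within eps of the maximum
    idx = np.argmax(vals >= vals.max() - eps)
    return A[idx]

def eps_subgradient_descent(x0, A, b, steps=200, eps=1e-3):
    x = x0.astype(float)
    best = f(x, A, b)
    for k in range(1, steps + 1):
        g = eps_subgradient(x, A, b, eps)
        x = x - (1.0 / k) * g          # classical diminishing step size
        best = min(best, f(x, A, b))
    return best

# f(x) = max(|x1|, |x2|), minimized at the origin with value 0
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.zeros(4)
print(eps_subgradient_descent(np.array([3.0, -2.0]), A, b))
```

With a diminishing step size the best objective value drifts toward the minimum; the point of the paper's class of functions is that, under additional structure, a linear convergence rate is attainable instead of this slow sublinear behavior.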
Cited In (21)
- Nonmonotone spectral gradient method for sparse recovery
- Linear convergence of the derivative-free proximal bundle method on convex nonsmooth functions, with application to the derivative-free \(\mathcal{VU}\)-algorithm
- On the convergence of conditional \(\varepsilon\)-subgradient methods for convex programs and convex-concave saddle-point problems.
- Randomized smoothing variance reduction method for large-scale non-smooth convex optimization
- Subgradient and bundle methods for nonsmooth optimization
- Gradient-based method with active set strategy for \(\ell_1\) optimization
- A Unified Analysis of Descent Sequences in Weakly Convex Optimization, Including Convergence Rates for Bundle Methods
- Survey Descent: A Multipoint Generalization of Gradient Descent for Nonsmooth Optimization
- Title not available
- Computational efficiency of the simplex embedding method in convex nondifferentiable optimization
- New approach to the \(\eta \)-proximal point algorithm and nonlinear variational inclusion problems
- On Rockafellar's theorem using proximal point algorithm involving \(H\)-maximal monotonicity framework
- Comparing different nonsmooth minimization methods and software
- Generalized Eckstein-Bertsekas proximal point algorithm involving \((H,\eta )\)-monotonicity framework
- Super-relaxed \((\eta)\)-proximal point algorithms, relaxed \((\eta)\)-proximal point algorithms, linear convergence analysis, and nonlinear variational inclusions
- Weak convexity and approximate subdifferentials
- Generalized Eckstein-Bertsekas proximal point algorithm based on \(A\)-maximal monotonicity design
- Convergence rates of subgradient methods for quasi-convex optimization problems
- Scaling techniques for \(\epsilon\)-subgradient methods
- A coordinate gradient descent method for nonsmooth separable minimization
- On the convergence of primal-dual hybrid gradient algorithms for total variation image restoration