Error stability properties of generalized gradient-type algorithms
DOI: 10.1023/A:1022680114518 · zbMath: 0913.90245 · OpenAlex: W1551892507 · MaRDI QID: Q1273917
Authors: S. K. Zavriev, Mikhail V. Solodov
Publication date: 31 May 1999
Published in: Journal of Optimization Theory and Applications
Full work available at URL: https://doi.org/10.1023/a:1022680114518
Keywords: convergence analysis; approximate solutions; perturbations; incremental algorithms; gradient-type methods; error stability; generalized subgradient-type algorithms; strongly convex problems
Related Items (32)
- Interior quasi-subgradient method with non-Euclidean distances for constrained quasi-convex optimization problems in Hilbert spaces
- Convergence analysis of iterative methods for nonsmooth convex optimization over fixed point sets of quasi-nonexpansive mappings
- Proximal point algorithms for nonsmooth convex optimization with fixed point constraints
- Descent methods with linesearch in the presence of perturbations
- On approximations with finite precision in bundle methods for nonsmooth optimization
- Convergence analysis of perturbed feasible descent methods
- Spectral projected subgradient with a momentum term for the Lagrangean dual approach
- Detection of iterative adversarial attacks via counter attack
- A redistributed proximal bundle method for nonsmooth nonconvex functions with inexact information
- Distributed stochastic subgradient projection algorithms for convex optimization
- New proximal bundle algorithm based on the gradient sampling method for nonsmooth nonconvex optimization with exact and inexact information
- First order inertial optimization algorithms with threshold effects associated with dry friction
- Convergence of Random Reshuffling under the Kurdyka–Łojasiewicz Inequality
- On the computational efficiency of subgradient methods: a case study with Lagrangian bounds
- On the convergence of conditional \(\varepsilon\)-subgradient methods for convex programs and convex-concave saddle-point problems
- A proximal bundle method for constrained nonsmooth nonconvex optimization with inexact information
- Incremental proximal methods for large scale convex optimization
- The effect of deterministic noise in subgradient methods
- String-averaging incremental stochastic subgradient algorithms
- The Extragradient Method for Convex Optimization in the Presence of Computational Errors
- Zero-convex functions, perturbation resilience, and subgradient projections for feasibility-seeking methods
- An incremental subgradient method on Riemannian manifolds
- Incremental-like bundle methods with application to energy planning
- The Projected Subgradient Method for Nonsmooth Convex Optimization in the Presence of Computational Errors
- A proximal bundle method for nonsmooth nonconvex functions with inexact information
- Bounded perturbation resilience of projected scaled gradient methods
- Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization
- A merit function approach to the subgradient method with averaging
- Scaling Techniques for \(\epsilon\)-Subgradient Methods
- On perturbed steepest descent methods with inexact line search for bilevel convex optimization
- On the projected subgradient method for nonsmooth convex optimization in a Hilbert space
- Weak subgradient method for solving nonsmooth nonconvex optimization problems
Cites Work
- Nondifferential optimization via adaptive smoothing
- Incremental gradient algorithms with stepsizes bounded away from zero
- On the projected subgradient method for nonsmooth convex optimization in a Hilbert space
- Descent methods with linesearch in the presence of perturbations
- New inexact parallel variable distribution algorithms
- Convergence analysis of perturbed feasible descent methods
- Weak Sharp Minima in Mathematical Programming
- The direct Lyapunov method in investigating the attraction of trajectories of finite-difference inclusions
- Optimization and nonsmooth analysis
- Mathematical Programming in Neural Networks
- A New Class of Incremental Gradient Methods for Least Squares Problems
- An Incremental Gradient(-Projection) Method with Momentum Term and Adaptive Stepsize Rule
- Convergence properties of the gradient method under conditions of variable-level interference
- Incremental Least Squares Methods and the Extended Kalman Filter