Incremental Subgradients for Constrained Convex Optimization: A Unified Framework and New Methods
From MaRDI portal
Publication: 3586148
DOI: 10.1137/070711712 · zbMath: 1207.65082 · OpenAlex: W1965013875 · MaRDI QID: Q3586148
Elias Salomão Helou Neto, Alvaro Rodolfo de Pierro
Publication date: 6 September 2010
Published in: SIAM Journal on Optimization
Full work available at URL: https://doi.org/10.1137/070711712
Keywords: convergence; projection methods; subgradient methods; convex feasibility problem; nonsmooth convex optimization; incremental subgradient
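For orientation, the keywords above refer to the classical incremental subgradient scheme: to minimize a sum of convex components over a closed convex set, one cycles through the components, takes a subgradient step for each, and projects back onto the feasible set. The following is a generic textbook-style sketch (not code from the paper itself), assuming an illustrative objective sum_i |a_i^T x - b_i| and the Euclidean unit ball as the constraint set:

```python
import numpy as np

def project_ball(x, radius=1.0):
    # Euclidean projection onto {x : ||x|| <= radius}
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def incremental_subgradient(A, b, passes=200, alpha0=1.0):
    """Minimize sum_i |a_i^T x - b_i| over the unit ball by
    processing one component per inner step (illustrative example,
    not the specific method proposed in the paper)."""
    m, n = A.shape
    x = np.zeros(n)
    for k in range(passes):
        alpha = alpha0 / (k + 1)  # diminishing step size
        for i in range(m):        # one incremental pass over the components
            # subgradient of |a_i^T x - b_i| with respect to x
            g = np.sign(A[i] @ x - b[i]) * A[i]
            x = project_ball(x - alpha * g)
    return x
```

With a diminishing step size the iterates approach the constrained minimizer; the projection after every component step keeps each iterate feasible, which is the structural feature the paper's unified framework generalizes.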
Related Items (23)
Convergence analysis of iterative methods for nonsmooth convex optimization over fixed point sets of quasi-nonexpansive mappings
Distributed multi-task classification: a decentralized online learning approach
Proximal point algorithms for nonsmooth convex optimization with fixed point constraints
A new step size rule for the superiorization method and its application in computerized tomography
A unified treatment of some perturbed fixed point iterative methods with an infinite pool of operators
Stochastic approximation with discontinuous dynamics, differential inclusions, and applications
On the computational efficiency of subgradient methods: a case study with Lagrangian bounds
Bounded perturbations resilient iterative methods for linear systems and least squares problems: operator-based approaches, analysis, and performance evaluation
Incremental proximal methods for large scale convex optimization
An infeasible-point subgradient method using adaptive approximate projections
String-averaging incremental stochastic subgradient algorithms
Derivative-free superiorization with component-wise perturbations
A Smooth Inexact Penalty Reformulation of Convex Problems with Linear Constraints
Projected subgradient minimization versus superiorization
Incremental quasi-subgradient methods for minimizing the sum of quasi-convex functions
String-averaging projected subgradient methods for constrained minimization
Bounded perturbation resilience of projected scaled gradient methods
Decentralized hierarchical constrained convex optimization
Abstract convergence theorem for quasi-convex optimization problems with applications
Dykstra's splitting and an approximate proximal point algorithm for minimizing the sum of convex functions
On perturbed steepest descent methods with inexact line search for bilevel convex optimization
Convergence rates of subgradient methods for quasi-convex optimization problems
The incremental subgradient methods on distributed estimations in-network