Incremental Subgradients for Constrained Convex Optimization: A Unified Framework and New Methods

From MaRDI portal
Publication: 3586148

DOI: 10.1137/070711712 · zbMath: 1207.65082 · OpenAlex: W1965013875 · MaRDI QID: Q3586148

Elias Salomão Helou Neto, Alvaro Rodolfo de Pierro

Publication date: 6 September 2010

Published in: SIAM Journal on Optimization

Full work available at URL: https://doi.org/10.1137/070711712


Related Items (23)

- Convergence analysis of iterative methods for nonsmooth convex optimization over fixed point sets of quasi-nonexpansive mappings
- Distributed multi-task classification: a decentralized online learning approach
- Proximal point algorithms for nonsmooth convex optimization with fixed point constraints
- A new step size rule for the superiorization method and its application in computerized tomography
- A unified treatment of some perturbed fixed point iterative methods with an infinite pool of operators
- Stochastic approximation with discontinuous dynamics, differential inclusions, and applications
- On the computational efficiency of subgradient methods: a case study with Lagrangian bounds
- Bounded perturbations resilient iterative methods for linear systems and least squares problems: operator-based approaches, analysis, and performance evaluation
- Incremental proximal methods for large scale convex optimization
- An infeasible-point subgradient method using adaptive approximate projections
- String-averaging incremental stochastic subgradient algorithms
- Derivative-free superiorization with component-wise perturbations
- A Smooth Inexact Penalty Reformulation of Convex Problems with Linear Constraints
- Projected subgradient minimization versus superiorization
- Incremental quasi-subgradient methods for minimizing the sum of quasi-convex functions
- String-averaging projected subgradient methods for constrained minimization
- Bounded perturbation resilience of projected scaled gradient methods
- Decentralized hierarchical constrained convex optimization
- Abstract convergence theorem for quasi-convex optimization problems with applications
- Dykstra's splitting and an approximate proximal point algorithm for minimizing the sum of convex functions
- On perturbed steepest descent methods with inexact line search for bilevel convex optimization
- Convergence rates of subgradient methods for quasi-convex optimization problems
- The incremental subgradient methods on distributed estimations in-network
