The effect of deterministic noise in subgradient methods
DOI: 10.1007/s10107-008-0262-5 · zbMATH Open: 1205.90225 · OpenAlex: W2172228337 · MaRDI QID: Q1960191 · FDO: Q1960191
Authors: Angelia Nedić, Dimitri P. Bertsekas
Publication date: 13 October 2010
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://doi.org/10.1007/s10107-008-0262-5
Recommendations
- Incremental subgradient methods for nondifferentiable optimization
- Inexact subgradient methods for quasi-convex optimization problems
- Convergence rate of incremental subgradient algorithms
- On convergence of the stochastic subgradient method with on-line stepsize rules
- Incremental stochastic subgradient algorithms for convex optimization
Classification (MSC):
- Convex programming (90C25)
- Sensitivity, stability, parametric optimization (90C31)
- Derivative-free methods and methods using generalized derivatives (90C56)
Cites Work
- The ordered subsets mirror descent optimization method with applications to tomography
- Quantized consensus
- Convergence of Approximate and Incremental Subgradient Methods for Convex Optimization
- Error bounds in mathematical programming
- A Proximal Bundle Method with Approximate Subgradient Linearizations
- Weak Sharp Minima in Mathematical Programming
- Decomposition into functions in the minimization problem
- Incremental subgradient methods for nondifferentiable optimization
- An Incremental Method for Solving Convex Finite Min-Max Problems
- Convergence of a simple subgradient level method
- Stochastic quasigradient methods and their application to system optimization
- Error stability properties of generalized gradient-type algorithms
- Nonlinear programming methods in the presence of noise
- Distributed Average Consensus With Dithered Quantization
- Distributed asynchronous incremental subgradient methods
Cited In (31)
- First-order methods of smooth convex optimization with inexact oracle
- On finite termination of an inexact proximal point algorithm
- Minimizing Piecewise-Concave Functions Over Polyhedra
- Incremental quasi-subgradient methods for minimizing the sum of quasi-convex functions
- A derivative-free trust-region algorithm for the optimization of functions smoothed via Gaussian convolution using adaptive multiple importance sampling
- A proximal bundle method for constrained nonsmooth nonconvex optimization with inexact information
- On the convergence of gradient-like flows with noisy gradient input
- Subgradient method with feasible inexact projections for constrained convex optimization problems
- Distributed optimization with inexact oracle
- A subgradient method with non-monotone line search
- A study on distributed optimization over large-scale networked systems
- Stochastic derivative-free optimization using a trust region framework
- A proximal bundle method for nonsmooth nonconvex functions with inexact information
- Convergence of inexact quasisubgradient methods with extrapolation
- Incremental proximal methods for large scale convex optimization
- A redistributed proximal bundle method for nonsmooth nonconvex functions with inexact information
- Zeroth-order regularized optimization (ZORO): approximately sparse gradients and adaptive sampling
- An infeasible-point subgradient method using adaptive approximate projections
- Adaptive Bundle Methods for Nonlinear Robust Optimization
- Stochastic mirror descent method for distributed multi-agent optimization
- Abstract convergence theorem for quasi-convex optimization problems with applications
- Incremental stochastic subgradient algorithms for convex optimization
- A splitting bundle approach for non-smooth non-convex minimization
- Fault tolerant distributed portfolio optimization in smart grids
- Zero-convex functions, perturbation resilience, and subgradient projections for feasibility-seeking methods
- Bundle method for non-convex minimization with inexact subgradients and function values
- Generalised gossip-based subgradient method for distributed optimisation
- Weak subgradient method for solving nonsmooth nonconvex optimization problems
- Faster subgradient methods for functions with Hölderian growth
- Convergence of Approximate and Incremental Subgradient Methods for Convex Optimization
- Inexact subgradient methods for quasi-convex optimization problems