The effect of deterministic noise in subgradient methods
DOI: 10.1007/s10107-008-0262-5
zbMath: 1205.90225
OpenAlex: W2172228337
MaRDI QID: Q1960191
Dimitri P. Bertsekas, Angelia Nedić
Publication date: 13 October 2010
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://doi.org/10.1007/s10107-008-0262-5
MSC classification:
- Convex programming (90C25)
- Derivative-free methods and methods using generalized derivatives (90C56)
- Sensitivity, stability, parametric optimization (90C31)
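The record carries no abstract, so as orientation here is a minimal sketch of the setting the title names: a projected subgradient iteration whose oracle returns subgradients corrupted by a bounded, possibly adversarial (deterministic) error of norm at most epsilon, under which the method can only be expected to reach an O(epsilon)-neighborhood of the optimal value. All function names and parameters below are illustrative assumptions, not taken from the paper; the random draw merely stands in for an arbitrary bounded error.

```python
import numpy as np

def noisy_projected_subgradient(subgrad, project, x0, steps, noise_level, rng=None):
    """Projected subgradient method with a bounded oracle error.

    subgrad(x)  -- returns a subgradient of the objective at x
    project(x)  -- Euclidean projection onto the feasible set
    noise_level -- bound epsilon on the norm of the oracle error
    """
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for k in range(1, steps + 1):
        g = subgrad(x)
        # Corrupt the subgradient with an error of norm exactly
        # noise_level; the analysis allows any such bounded error,
        # the random direction here is purely for illustration.
        e = rng.standard_normal(x.shape)
        e *= noise_level / max(np.linalg.norm(e), 1e-12)
        x = project(x - (1.0 / k) * (g + e))  # diminishing step 1/k
    return x

# Example: minimize f(x) = ||x||_1 over the box [-1, 2]^2.
x_final = noisy_projected_subgradient(
    subgrad=lambda x: np.sign(x),       # a subgradient of the 1-norm
    project=lambda x: np.clip(x, -1.0, 2.0),
    x0=np.array([2.0, -1.0]),
    steps=500,
    noise_level=0.01,
)
print(x_final)  # lands within O(noise_level) of the minimizer 0
```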
Related Items
- Subgradient method with feasible inexact projections for constrained convex optimization problems
- Stochastic derivative-free optimization using a trust region framework
- Inexact subgradient methods for quasi-convex optimization problems
- Convergence of inexact quasisubgradient methods with extrapolation
- Zeroth-Order Regularized Optimization (ZORO): Approximately Sparse Gradients and Adaptive Sampling
- On finite termination of an inexact proximal point algorithm
- Stochastic mirror descent method for distributed multi-agent optimization
- A redistributed proximal bundle method for nonsmooth nonconvex functions with inexact information
- First-order methods of smooth convex optimization with inexact oracle
- Adaptive Bundle Methods for Nonlinear Robust Optimization
- A proximal bundle method for constrained nonsmooth nonconvex optimization with inexact information
- A subgradient method with non-monotone line search
- Incremental proximal methods for large scale convex optimization
- An infeasible-point subgradient method using adaptive approximate projections
- Generalised gossip-based subgradient method for distributed optimisation
- Zero-convex functions, perturbation resilience, and subgradient projections for feasibility-seeking methods
- A Derivative-Free Trust-Region Algorithm for the Optimization of Functions Smoothed via Gaussian Convolution Using Adaptive Multiple Importance Sampling
- Fault tolerant distributed portfolio optimization in smart grids
- Incremental quasi-subgradient methods for minimizing the sum of quasi-convex functions
- Bundle Method for Non-Convex Minimization with Inexact Subgradients and Function Values
- A study on distributed optimization over large-scale networked systems
- A proximal bundle method for nonsmooth nonconvex functions with inexact information
- Faster subgradient methods for functions with Hölderian growth
- Minimizing Piecewise-Concave Functions Over Polyhedra
- Abstract convergence theorem for quasi-convex optimization problems with applications
- A splitting bundle approach for non-smooth non-convex minimization
- Weak subgradient method for solving nonsmooth nonconvex optimization problems
- Distributed optimization with inexact oracle
Cites Work
- Decomposition into functions in the minimization problem
- Error stability properties of generalized gradient-type algorithms
- Convergence of a simple subgradient level method
- Error bounds in mathematical programming
- Quantized consensus
- The Ordered Subsets Mirror Descent Optimization Method with Applications to Tomography
- Incremental Subgradient Methods for Nondifferentiable Optimization
- Weak Sharp Minima in Mathematical Programming
- Stochastic quasigradient methods and their application to system optimization
- Nonlinear programming methods in the presence of noise
- Distributed Average Consensus With Dithered Quantization
- Convergence of Approximate and Incremental Subgradient Methods for Convex Optimization
- An Incremental Method for Solving Convex Finite Min-Max Problems
- A Proximal Bundle Method with Approximate Subgradient Linearizations