Convergence rate of incremental subgradient algorithms
Publication: Q2752037
zbMATH Open: 0984.90033
MaRDI QID: Q2752037
Authors: Angelia Nedić, Dimitri P. Bertsekas
Publication date: 14 May 2002
Recommendations
- Incremental subgradient methods for nondifferentiable optimization
- An incremental mirror descent subgradient algorithm with random sweeping and proximal step
- Convergence of Approximate and Incremental Subgradient Methods for Convex Optimization
- Scientific article (title not available; zbMATH DE number 5221408)
- Incremental proximal methods for large scale convex optimization
Keywords: convergence; convex function; convex programming; Lagrangian relaxation; nondifferentiable optimization; subgradient algorithms; stochastic subgradient methods; subgradient iteration
Mathematics Subject Classification: Convex programming (90C25); Stochastic programming (90C15); Convex functions and convex programs in convex geometry (52A41)
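The publication analyzes incremental subgradient methods for minimizing a sum \(f(x) = \sum_i f_i(x)\) of convex, possibly nondifferentiable, component functions over a convex set \(X\): rather than taking one step along a subgradient of the full sum, the iterate is updated sequentially after each component. The following is a minimal Python sketch of that iteration for orientation only; the hinge-loss instance, the box constraint, the step-size schedule, and all function names are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: incremental subgradient method for min_x sum_i f_i(x) over X,
# with f_i(x) = max(0, b_i - a_i . x) (hypothetical hinge losses) and X a box.
import numpy as np

def project_box(x, lo=-10.0, hi=10.0):
    """Euclidean projection onto the box [lo, hi]^n (stands in for P_X)."""
    return np.clip(x, lo, hi)

def hinge_subgradient(x, a, b):
    """A subgradient of f_i(x) = max(0, b - a.x) at x."""
    return -a if b - a @ x > 0 else np.zeros_like(x)

def incremental_subgradient(A, b, cycles=200, alpha0=1.0):
    """Cycle through the components, one projected subgradient step each."""
    x = np.zeros(A.shape[1])
    for k in range(cycles):
        alpha = alpha0 / (k + 1)          # diminishing step size
        for a_i, b_i in zip(A, b):        # one cycle through the sum
            g = hinge_subgradient(x, a_i, b_i)
            x = project_box(x - alpha * g)
    return x

rng = np.random.default_rng(0)
A, b = rng.standard_normal((50, 5)), rng.standard_normal(50)
x = incremental_subgradient(A, b)
print("objective:", np.maximum(0.0, b - A @ x).sum())
```

The inner loop visits the components in a fixed cyclic order; the paper's convergence-rate results also cover randomized component orders, which in this sketch would amount to shuffling the `zip(A, b)` sequence each cycle.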
Cited In (54)
- A trust region method for noisy unconstrained optimization
- Incremental subgradient algorithms with dynamic step sizes for separable convex optimizations
- Subgradient methods for huge-scale optimization problems
- An incremental mirror descent subgradient algorithm with random sweeping and proximal step
- Analysis of the BFGS Method with Errors
- Subgradient methods for saddle-point problems
- Semi-discrete optimal transport: hardness, regularization and numerical solution
- Randomized smoothing variance reduction method for large-scale non-smooth convex optimization
- A globally convergent incremental Newton method
- An incremental subgradient method on Riemannian manifolds
- A review of decentralized optimization focused on information flows of decomposition algorithms
- Subgradient method with feasible inexact projections for constrained convex optimization problems
- A subgradient method with non-monotone line search
- Stochastic algorithms with geometric step decay converge linearly on sharp functions
- Distributed stochastic subgradient projection algorithms for convex optimization
- Inexact first-order primal-dual algorithms
- An optimal randomized incremental gradient method
- Incremental proximal methods for large scale convex optimization
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- New results on subgradient methods for strongly convex optimization problems with a unified analysis
- A robust multi-batch L-BFGS method for machine learning
- Stochastic first-order methods with random constraint projection
- Hybrid deterministic-stochastic methods for data fitting
- Random minibatch subgradient algorithms for convex problems with functional constraints
- Discrete-time gradient flows and law of large numbers in Alexandrov spaces
- Strong law of large numbers for generalized operator means
- Why random reshuffling beats stochastic gradient descent
- On the Convergence Rate of Incremental Aggregated Gradient Algorithms
- Incremental stochastic subgradient algorithms for convex optimization
- Convergence analysis of inexact randomized iterative methods
- Modified Fejér sequences and applications
- Lagrangian relaxation of the generic materials and operations planning model
- Derivative-Free Optimization of Noisy Functions via Quasi-Newton Methods
- Convergence rate of incremental gradient and incremental Newton methods
- Bridging the gap between constant step size stochastic gradient descent and Markov chains
- A stochastic quasi-Newton method for large-scale optimization
- The effect of deterministic noise in subgradient methods
- Title not available
- Convergence rates of subgradient methods for quasi-convex optimization problems
- Strong law of large numbers for the \(L^1\)-Karcher mean
- Weakly convex optimization over Stiefel manifold using Riemannian subgradient-type methods
- Incremental gradient-free method for nonsmooth distributed optimization
- Incremental subgradient methods for nondifferentiable optimization
- On stochastic gradient and subgradient methods with adaptive steplength sequences
- Faster subgradient methods for functions with Hölderian growth
- Nonconvex Robust Low-Rank Matrix Recovery
- Convergence of Approximate and Incremental Subgradient Methods for Convex Optimization
- A Randomized Incremental Subgradient Method for Distributed Optimization in Networked Systems
- Rate of convergence of the Nesterov accelerated gradient method in the subcritical case α ≤ 3
- Minimizing finite sums with the stochastic average gradient
- Convergence analysis of incremental and parallel line search subgradient methods in Hilbert space
- Analysis of biased stochastic gradient descent using sequential semidefinite programs
- Adaptive sequential sample average approximation for solving two-stage stochastic linear programs
- Path-based incremental target level algorithm on Riemannian manifolds