Convergence rate of incremental subgradient algorithms
Publication:2752037
Recommendations
- Incremental subgradient methods for nondifferentiable optimization
- An incremental mirror descent subgradient algorithm with random sweeping and proximal step
- Convergence of Approximate and Incremental Subgradient Methods for Convex Optimization
- scientific article; zbMATH DE number 5221408
- Incremental proximal methods for large scale convex optimization
Cited in (54)
- Path-based incremental target level algorithm on Riemannian manifolds
- Adaptive sequential sample average approximation for solving two-stage stochastic linear programs
- A trust region method for noisy unconstrained optimization
- Incremental subgradient algorithms with dynamic step sizes for separable convex optimizations
- Subgradient methods for huge-scale optimization problems
- An incremental mirror descent subgradient algorithm with random sweeping and proximal step
- Analysis of the BFGS Method with Errors
- Subgradient methods for saddle-point problems
- Randomized smoothing variance reduction method for large-scale non-smooth convex optimization
- Semi-discrete optimal transport: hardness, regularization and numerical solution
- A globally convergent incremental Newton method
- An incremental subgradient method on Riemannian manifolds
- A subgradient method with non-monotone line search
- Subgradient method with feasible inexact projections for constrained convex optimization problems
- A review of decentralized optimization focused on information flows of decomposition algorithms
- Distributed stochastic subgradient projection algorithms for convex optimization
- Stochastic algorithms with geometric step decay converge linearly on sharp functions
- Inexact first-order primal-dual algorithms
- An optimal randomized incremental gradient method
- Incremental proximal methods for large scale convex optimization
- New results on subgradient methods for strongly convex optimization problems with a unified analysis
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Stochastic first-order methods with random constraint projection
- A robust multi-batch L-BFGS method for machine learning
- Hybrid deterministic-stochastic methods for data fitting
- Random minibatch subgradient algorithms for convex problems with functional constraints
- Discrete-time gradient flows and law of large numbers in Alexandrov spaces
- Strong law of large numbers for generalized operator means
- Why random reshuffling beats stochastic gradient descent
- Incremental stochastic subgradient algorithms for convex optimization
- On the Convergence Rate of Incremental Aggregated Gradient Algorithms
- Convergence analysis of inexact randomized iterative methods
- Modified Fejér sequences and applications
- Lagrangian relaxation of the generic materials and operations planning model
- A stochastic quasi-Newton method for large-scale optimization
- Bridging the gap between constant step size stochastic gradient descent and Markov chains
- Derivative-Free Optimization of Noisy Functions via Quasi-Newton Methods
- Convergence rate of incremental gradient and incremental Newton methods
- The effect of deterministic noise in subgradient methods
- Convergence rates of subgradient methods for quasi-convex optimization problems
- Strong law of large numbers for the \(L^1\)-Karcher mean
- scientific article; zbMATH DE number 7626722
- Weakly convex optimization over Stiefel manifold using Riemannian subgradient-type methods
- Incremental gradient-free method for nonsmooth distributed optimization
- Incremental subgradient methods for nondifferentiable optimization
- On stochastic gradient and subgradient methods with adaptive steplength sequences
- Faster subgradient methods for functions with Hölderian growth
- Nonconvex Robust Low-Rank Matrix Recovery
- Minimizing finite sums with the stochastic average gradient
- A Randomized Incremental Subgradient Method for Distributed Optimization in Networked Systems
- Convergence of Approximate and Incremental Subgradient Methods for Convex Optimization
- Rate of convergence of the Nesterov accelerated gradient method in the subcritical case \(\alpha \le 3\)
- Analysis of biased stochastic gradient descent using sequential semidefinite programs
- Convergence analysis of incremental and parallel line search subgradient methods in Hilbert space