Incremental gradient algorithms with stepsizes bounded away from zero
Cited in (33)
- Global convergence of the Dai-Yuan conjugate gradient method with perturbations
- On the convergence of a block-coordinate incremental gradient method
- Incremental subgradient algorithms with dynamic step sizes for separable convex optimizations
- Approximation accuracy, gradient methods, and error bound for structured convex optimization
- The Averaged Kaczmarz Iteration for Solving Inverse Problems
- An incremental decomposition method for unconstrained optimization
- Stochastic subgradient algorithm for nonsmooth nonconvex optimization
- A New Class of Incremental Gradient Methods for Least Squares Problems
- A globally convergent incremental Newton method
- String-averaging incremental stochastic subgradient algorithms
- Incrementally updated gradient methods for constrained and regularized optimization
- Incremental quasi-Newton algorithms for solving a nonconvex, nonsmooth, finite-sum optimization problem
- Descent methods with linesearch in the presence of perturbations
- Robust inversion, dimensionality reduction, and randomized sampling
- Convergence analysis of perturbed feasible descent methods
- A scaled incremental gradient method
- A smooth inexact penalty reformulation of convex problems with linear constraints
- Incremental proximal methods for large scale convex optimization
- On perturbed steepest descent methods with inexact line search for bilevel convex optimization
- Random algorithms for convex minimization problems
- On the linear convergence of the stochastic gradient method with constant step-size
- Global convergence rate of proximal incremental aggregated gradient methods
- Convergence of Random Reshuffling under the Kurdyka–Łojasiewicz Inequality
- Error stability properties of generalized gradient-type algorithms
- On the Convergence Rate of Incremental Aggregated Gradient Algorithms
- Sliced and Radon Wasserstein barycenters of measures
- Convergence rate of incremental gradient and incremental Newton methods
- Network synchronization with convexity
- Incremental gradient-free method for nonsmooth distributed optimization
- Recent Theoretical Advances in Non-Convex Optimization
- Convergence properties of proximal (sub)gradient methods without convexity or smoothness of any of the functions
- Minimizing finite sums with the stochastic average gradient
- A framework for parallel second order incremental optimization algorithms for solving partially separable problems