Convergence of gradient algorithms for nonconvex \(C^{1+ \alpha}\) cost functions
From MaRDI portal
Publication: 6167107
DOI: 10.1007/s11401-023-0024-y
arXiv: 2012.00628
OpenAlex: W3106830331
MaRDI QID: Q6167107
Publication date: 7 July 2023
Published in: Chinese Annals of Mathematics. Series B
Full work available at URL: https://arxiv.org/abs/2012.00628
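The paper's subject is the plain gradient iteration \(x_{k+1} = x_k - \gamma_k \nabla f(x_k)\) applied to a nonconvex cost \(f\) whose gradient is only \(\alpha\)-Hölder continuous, i.e. \(f \in C^{1+\alpha}\). A minimal sketch of that iteration in Python, assuming nothing from the paper itself: the double-well cost and the step-size schedule below are hypothetical illustrative choices, not the authors' method.

```python
def f(x):
    """Illustrative nonconvex double-well cost (smooth, hence in C^{1+alpha})."""
    return 0.25 * x**4 - 0.5 * x**2

def grad_f(x):
    """Gradient of f; two stable critical points at x = -1 and x = +1."""
    return x**3 - x

x = 0.3  # arbitrary starting point
for k in range(1, 2001):
    gamma = 0.1 / k**0.6  # diminishing, non-summable steps (hypothetical schedule)
    x -= gamma * grad_f(x)

print(f"x = {x:.6f}, grad = {grad_f(x):.2e}")  # x approaches the critical point x = 1
```

Under the weaker \(C^{1+\alpha}\) hypothesis the gradient is not Lipschitz, so the classical descent-lemma analysis does not apply directly; convergence results of the kind the paper studies concern exactly such iterations with suitably chosen step sizes.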
Cites Work
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- Stochastic approximation methods for constrained and unconstrained systems
- Introductory lectures on convex optimization. A basic course.
- Stochastic heavy ball
- Analytical convergence regions of accelerated gradient descent in nonconvex optimization under regularity condition
- Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems
- An Invariant Measure Approach to the Convergence of Stochastic Approximations with State Dependent Noise
- Analysis and Design of Optimization Algorithms via Integral Quadratic Constraints
- Acceleration of Stochastic Approximation by Averaging
- Analysis of recursive stochastic algorithms
- Convergence of stochastic algorithms: from the Kushner–Clark theorem to the Lyapounov functional method
- Gradient Convergence in Gradient Methods with Errors
- Optimization Methods for Large-Scale Machine Learning
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Some methods of speeding up the convergence of iteration methods
- Stochastic Estimation of the Maximum of a Regression Function
- A Stochastic Approximation Method
- The method of steepest descent for non-linear minimization problems