Stopping criteria for, and strong convergence of, stochastic gradient descent on Bottou-Curtis-Nocedal functions
DOI: 10.1007/s10107-021-01710-6 · zbMATH Open: 1505.65219 · arXiv: 2004.00475 · OpenAlex: W3208582873 · MaRDI QID: Q2089787 · FDO: Q2089787
Authors: Vivak Patel
Publication date: 24 October 2022
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/2004.00475
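The publication concerns stopping criteria for stochastic gradient descent. As a generic illustration of the topic only (this is not the criterion developed in the paper), the sketch below runs SGD on a toy noisy quadratic and stops once a running average of sampled gradient magnitudes drops below a threshold; all names, step sizes, and tolerances here are illustrative assumptions.

```python
import random

def sgd_with_stopping(grad_sample, x0, lr=0.05, tol=0.05,
                      window=50, max_iter=10_000, seed=0):
    """Run scalar SGD, stopping when the running average of sampled
    gradient magnitudes over `window` steps falls below `tol`.
    This is a generic heuristic stopping rule, not the paper's criterion."""
    rng = random.Random(seed)
    x = x0
    recent = []                       # last `window` values of |g|
    for k in range(max_iter):
        g = grad_sample(x, rng)       # noisy gradient estimate at x
        x -= lr * g                   # SGD step
        recent.append(abs(g))
        if len(recent) > window:
            recent.pop(0)
        if len(recent) == window and sum(recent) / window < tol:
            return x, k + 1           # stopped early
    return x, max_iter                # budget exhausted

# Toy problem: minimize f(x) = x^2, observing gradients 2x + Gaussian noise.
def noisy_grad(x, rng):
    return 2.0 * x + rng.gauss(0.0, 0.01)

x_star, iters = sgd_with_stopping(noisy_grad, x0=1.0)
print(x_star, iters)
```

With the small noise level assumed here, the averaged gradient magnitude settles near the noise floor once the iterate is close to the minimizer, so the rule triggers well before the iteration budget is exhausted.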
Recommendations
- Convergence rates for the stochastic gradient descent method for non-convex objective functions
- Convergence of constant step stochastic gradient descent for non-smooth non-convex functions
- Stopping rules for gradient methods for non-convex problems with additive noise in gradient
- Stochastic gradient descent with Polyak's learning rate
- Stochastic subgradient method converges on tame functions
MSC classification:
- Numerical mathematical programming methods (65K05)
- Large-scale problems in mathematical programming (90C06)
- Stochastic approximation (62L20)
- Nonlinear programming (90C30)
- Stochastic programming (90C15)
Cites Work
- Asymptotic Statistics
- A Stochastic Approximation Method
- Robust Stochastic Approximation Approach to Stochastic Programming
- Title not available
- Title not available
- Probability: Theory and Examples
- Mixed effects models for complex data
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization
- Stochastic Approximation of Minima with Improved Asymptotic Speed
- Title not available
- Stochastic Estimation of the Maximum of a Regression Function
- Stochastic quasigradient methods and their application to system optimization
- On a Stochastic Approximation Method
- On a new stopping rule for stochastic approximation
- Stopping times for stochastic approximation procedures
- A stopping rule for the Robbins-Monro method
- Title not available
- Bounded Length Confidence Intervals for the Zero of a Regression Function
- Optimization methods for large-scale machine learning
- Title not available
- Stochastic proximal quasi-Newton methods for non-convex composite optimization
- Kalman-based stochastic gradient method with stop condition and insensitivity to conditioning
Cited In (5)
- Gradient descent in the absence of global Lipschitz continuity of the gradients
- Stopping rules for gradient methods for non-convex problems with additive noise in gradient
- Robust optimization of control parameters for WEC arrays using stochastic methods
- Classical and fast parameters tuning in nearest neighbors with stop condition
- Gradient estimation for smooth stopping criteria