Extending the Step-Size Restriction for Gradient Descent to Avoid Strict Saddle Points


DOI: 10.1137/19M129509X
zbMATH Open: 1484.90088
arXiv: 1908.01753
OpenAlex: W3111569753
MaRDI QID: Q5027014


Authors: Hayden Schaeffer, S. G. McCalla


Publication date: 3 February 2022

Published in: SIAM Journal on Mathematics of Data Science

Abstract: We provide larger step-size restrictions under which gradient-descent-based algorithms (almost surely) avoid strict saddle points. In particular, consider a twice-differentiable (nonconvex) objective function whose gradient has Lipschitz constant L and whose Hessian is well behaved. We prove that, given one uniformly random initialization, the probability that gradient descent with step size up to 2/L converges to a strict saddle point is zero. This extends previous results up to the sharp limit imposed by the convex case. In addition, the arguments hold when a learning-rate schedule is given, with either a continuously decaying rate or a piecewise-constant schedule.


Full work available at URL: https://arxiv.org/abs/1908.01753
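
The following is a minimal sketch (not from the paper) illustrating the claim on the textbook strict-saddle function f(x, y) = (x^2 - y^2)/2, whose gradient is 1-Lipschitz (so L = 1). With step size eta = 1.9/L, beyond the classical 1/L restriction but below the 2/L limit stated in the abstract, one uniformly random initialization almost surely escapes the saddle at the origin:

```python
import numpy as np

def grad(z):
    # Gradient of f(x, y) = (x**2 - y**2) / 2. The origin is a strict
    # saddle: the Hessian diag(1, -1) has the negative eigenvalue -1.
    x, y = z
    return np.array([x, -y])

L = 1.0          # gradient Lipschitz constant of f
eta = 1.9 / L    # step size near the 2/L limit, above the classical 1/L

rng = np.random.default_rng(0)
z = rng.uniform(-1.0, 1.0, size=2)  # one uniformly random initialization

for _ in range(50):
    z = z - eta * grad(z)  # plain gradient descent

# The x-coordinate contracts by a factor |1 - eta| < 1 per step, while the
# y-coordinate grows by a factor 1 + eta, so the iterates leave the saddle
# for almost every initialization (i.e. whenever y0 != 0).
print(z)
```

The 2/L threshold matches the largest step size for which gradient descent remains stable on an L-smooth convex function, which is the "sharp limit imposed by the convex case" referred to in the abstract.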










Cited in: 5 documents





