A Diffusion Approximation Theory of Momentum Stochastic Gradient Descent in Nonconvex Optimization

From MaRDI portal
Publication: 5084492

DOI: 10.1287/STSY.2021.0083
zbMATH Open: 1489.90097
arXiv: 1802.05155
OpenAlex: W3210625965
MaRDI QID: Q5084492
FDO: Q5084492


Authors: Tianyi Liu, Zhehui Chen, Enlu Zhou, Tuo Zhao


Publication date: 24 June 2022

Published in: Stochastic Systems

Abstract: The Momentum Stochastic Gradient Descent (MSGD) algorithm has been widely applied to nonconvex optimization problems in machine learning, e.g., training deep neural networks and variational Bayesian inference. Despite its empirical success, there is still a lack of theoretical understanding of the convergence properties of MSGD. To fill this gap, we propose to analyze the algorithmic behavior of MSGD by diffusion approximations for nonconvex optimization problems with strict saddle points and isolated local optima. Our study shows that momentum helps escape from saddle points but hurts convergence within the neighborhood of optima (unless the step size or the momentum is annealed). Our theoretical discovery partially corroborates the empirical success of MSGD in training deep neural networks.
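For readers unfamiliar with the algorithm, the sketch below illustrates a standard MSGD update loop of the kind the abstract refers to: the stochastic gradient is modeled as the true gradient plus Gaussian noise (a common simplification in diffusion-approximation analyses), and momentum accumulates a velocity term. This is only a minimal illustration under those assumptions; the function name msgd and the parameters step_size, momentum, and noise_std are illustrative choices, not the paper's exact formulation or noise model.

import numpy as np

def msgd(grad, x0, step_size=0.01, momentum=0.9, noise_std=0.1, n_iters=2000, seed=0):
    """Minimal Momentum SGD loop (illustrative, not the paper's exact setup).

    The stochastic gradient is modeled as the true gradient plus Gaussian
    noise; momentum accumulates a velocity that is added to the iterate.
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(n_iters):
        g = grad(x) + noise_std * rng.standard_normal(x.shape)  # noisy gradient estimate
        v = momentum * v - step_size * g                         # velocity (momentum) update
        x = x + v                                                # parameter update
    return x

# Illustration: a double-well function f(x) = (x^2 - 1)^2 whose stationary
# point at x = 0 is unstable; gradient noise plus momentum push the iterate
# toward one of the two isolated minima at x = +1 or x = -1.
if __name__ == "__main__":
    grad_f = lambda x: 4.0 * x * (x ** 2 - 1.0)
    print(msgd(grad_f, x0=[0.0]))

With these illustrative settings, the iterate drifts away from the unstable stationary point at x = 0 toward one of the minima, but momentum and gradient noise keep it fluctuating around that minimum, which is consistent with the abstract's observation that momentum aids escape from saddle points while hurting convergence near optima when neither the step size nor the momentum is annealed.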


Full work available at URL: https://arxiv.org/abs/1802.05155




Cited In (15)


This page was built for publication: A Diffusion Approximation Theory of Momentum Stochastic Gradient Descent in Nonconvex Optimization
