Second-order Properties of Noisy Distributed Gradient Descent

Publication: 6431379

arXiv: 2303.17165 · MaRDI QID: Q6431379 · FDO: Q6431379


Authors: Lei Qin, Michael Cantoni, Ye Pu


Publication date: 30 March 2023

Abstract: We study a fixed-step-size noisy distributed gradient descent algorithm for solving optimization problems in which the objective is a finite sum of smooth but possibly non-convex functions. Random perturbations are added to the gradient descent directions at each step to actively evade saddle points. Under certain regularity conditions, and with a suitable step size, it is established that each agent converges to a neighborhood of a local minimizer, where the size of the neighborhood depends on the step size and the confidence parameter. A numerical example illustrates that the random perturbations allow the iterates to escape saddle points in fewer iterations than the unperturbed algorithm.
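The abstract describes the method only at a high level; the following is a minimal sketch of a perturbed distributed gradient step, not the authors' implementation. The doubly-stochastic mixing matrix W, the quartic local objectives in the example, the step size alpha, and the noise scale sigma are all illustrative assumptions.

```python
# Minimal sketch of noisy distributed gradient descent (illustrative only;
# not the algorithm exactly as specified in the paper). Assumptions: a
# doubly-stochastic mixing matrix W encodes the communication graph, each
# agent i supplies the gradient of its local smooth (possibly non-convex)
# function f_i, the step size alpha is fixed, and the perturbations are
# i.i.d. Gaussian with scale sigma.
import numpy as np


def noisy_distributed_gd(grads, W, x0, alpha=1e-2, sigma=1e-3, iters=1000, seed=0):
    """grads[i](x): gradient of agent i's local objective at x (shape (d,)).
    W: (n, n) doubly-stochastic mixing matrix.  x0: (n, d) initial estimates."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)  # row i = agent i's current estimate
    for _ in range(iters):
        g = np.stack([grads[i](x[i]) for i in range(len(grads))])
        noise = sigma * rng.standard_normal(x.shape)  # random perturbation
        # consensus (mixing) step combined with a perturbed local gradient step
        x = W @ x - alpha * (g + noise)
    return x


if __name__ == "__main__":
    # Hypothetical two-agent example: f_i(x) = (||x||^2 - c_i)^2 has a
    # non-minimizing stationary point at the origin, so unperturbed descent
    # started there never moves, while the noise lets the agents escape.
    c = [0.5, 1.5]
    grads = [lambda x, ci=ci: 4.0 * (x @ x - ci) * x for ci in c]
    W = np.array([[0.5, 0.5], [0.5, 0.5]])  # fully connected, equal weights
    x0 = np.zeros((2, 2))                   # start at the stationary point
    print(noisy_distributed_gd(grads, W, x0, iters=2000))
```

Setting sigma to 0 in this sketch leaves the iterates at the origin, which mirrors the abstract's comparison between the perturbed and unperturbed runs.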












