Stochastic Sign Descent Methods: New Algorithms and Better Theory

From MaRDI portal
Publication:6319645

arXiv: 1905.12938
MaRDI QID: Q6319645
FDO: Q6319645


Authors: Mher Safaryan, Peter Richtárik


Publication date: 30 May 2019

Abstract: Various gradient compression schemes have been proposed to mitigate the communication cost in distributed training of large-scale machine learning models. Sign-based methods, such as signSGD, have recently been gaining popularity because of their simple compression rule and connection to adaptive gradient methods, like Adam. In this paper, we analyze sign-based methods for non-convex optimization in three key settings: (i) standard single node, (ii) parallel with shared data, and (iii) distributed with partitioned data. For the single-machine case, we generalize the previous analysis of signSGD, relying on intuitive bounds on success probabilities and allowing even biased estimators. Furthermore, we extend the analysis to the parallel setting within a parameter-server framework, where exponentially fast noise reduction is guaranteed with respect to the number of nodes, while maintaining 1-bit compression in both directions and using small mini-batch sizes. Next, we identify a fundamental issue that prevents signSGD from converging in the distributed environment. To resolve this issue, we propose a new sign-based method, Stochastic Sign Descent with Momentum (SSDM), which converges under the standard bounded-variance assumption at the optimal asymptotic rate. We validate several aspects of our theoretical findings with numerical experiments.
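
The sign-compression rule the abstract refers to can be summarized in a few lines. The sketch below is a minimal illustration on a toy quadratic objective, assuming plain signSGD for the single-node case and element-wise majority voting for the parameter-server case with 1-bit compression in both directions; the objective, learning rate, noise level, and node count are illustrative choices, and the paper's SSDM method (which additionally maintains momentum on each node) is not reproduced exactly here.

# Illustrative sketch only: generic signSGD and majority-vote aggregation,
# not the exact SSDM update analyzed by Safaryan & Richtárik.
import numpy as np

rng = np.random.default_rng(0)

def stochastic_grad(x, noise=0.5):
    # Gradient of the toy objective f(x) = 0.5 * ||x||^2, plus Gaussian noise.
    return x + noise * rng.standard_normal(x.shape)

def signsgd(x0, lr=0.01, steps=500):
    # Single-node signSGD: step along the sign of a stochastic gradient.
    x = x0.copy()
    for _ in range(steps):
        x -= lr * np.sign(stochastic_grad(x))
    return x

def signsgd_majority_vote(x0, n_nodes=7, lr=0.01, steps=500):
    # Parallel variant with 1-bit compression in both directions:
    # each node sends sign(g_n); the server broadcasts the element-wise
    # majority vote, i.e. the sign of the sum of the signs.
    x = x0.copy()
    for _ in range(steps):
        votes = sum(np.sign(stochastic_grad(x)) for _ in range(n_nodes))
        x -= lr * np.sign(votes)
    return x

x0 = rng.standard_normal(10)
print("||x|| after signSGD       :", np.linalg.norm(signsgd(x0)))
print("||x|| after majority vote :", np.linalg.norm(signsgd_majority_vote(x0)))

With more voting nodes, the aggregated sign agrees with the true gradient sign with higher probability, which is the noise-reduction effect the abstract attributes to the parameter-server setting.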

This page was built for publication: Stochastic Sign Descent Methods: New Algorithms and Better Theory
