Nested Distributed Gradient Methods with Adaptive Quantized Communication

From MaRDI portal
Publication:6315901

arXiv: 1903.08149 · MaRDI QID: Q6315901


Authors: Albert S. Berahas, Ermin Wei


Publication date: 18 March 2019

Abstract: In this paper, we consider minimizing a sum of local convex objective functions in a distributed setting where communication can be costly. We propose and analyze a class of nested distributed gradient methods with adaptive quantized communication (NEAR-DGD+Q). We show the effect of performing multiple quantized communication steps on the rate of convergence and on the size of the neighborhood of convergence, and prove R-linear convergence to the exact solution with an increasing number of consensus steps and adaptive quantization. We test the performance of the method, as well as some practical variants, on quadratic functions, and show the effects of multiple quantized communication steps in terms of iterations/gradient evaluations, amount of communication, and cost.
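To make the abstract's description concrete, the following is a minimal sketch of a nested distributed gradient loop with adaptive quantized communication, in the spirit of NEAR-DGD+Q. All names, the step size, the uniform quantizer, the schedule of consensus steps (k+1 rounds at outer iteration k), and the geometric shrinking of the quantization step are illustrative assumptions, not the paper's exact algorithm or parameters.

```python
import numpy as np

def quantize(x, delta):
    """Uniform quantizer with step size delta (round to the nearest grid point)."""
    return delta * np.round(x / delta)

def near_dgd_q_sketch(grads, x0, W, alpha=0.1, iters=100, delta0=1.0, shrink=0.9):
    """Illustrative nested gradient/consensus loop with adaptive quantization.

    grads : list of local gradient functions, one per agent
    W     : doubly stochastic mixing matrix (n_agents x n_agents)

    At outer iteration k, agents perform k+1 quantized consensus rounds
    (the "increasing number of consensus steps" from the abstract), and the
    quantizer step size shrinks geometrically (adaptive quantization).
    """
    n = len(grads)
    x = np.array([x0.copy() for _ in range(n)])   # local iterates, shape (n, d)
    delta = delta0
    for k in range(iters):
        # local gradient step on each agent's copy of the decision variable
        y = x - alpha * np.array([g(xi) for g, xi in zip(grads, x)])
        # nested consensus: multiple rounds of mixing quantized values
        for _ in range(k + 1):
            y = W @ quantize(y, delta)
        x = y
        delta *= shrink   # refine the quantizer as the iterates settle
    return x

# Toy example: agents minimize f_i(x) = 0.5 * (x - b_i)^2, whose sum is
# minimized at the average of the b_i.
b = np.array([1.0, 2.0, 3.0])
grads = [lambda x, bi=bi: x - bi for bi in b]
W = np.full((3, 3), 1 / 3)   # complete-graph averaging matrix
x_final = near_dgd_q_sketch(grads, np.zeros(1), W, alpha=0.5, iters=60)
print(x_final.ravel())       # all agents close to the average, 2.0
```

Because the quantization step shrinks over the iterations, the quantization error injected into each consensus round vanishes, which is the mechanism behind convergence to the exact solution rather than to a neighborhood of it.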

This page was built for publication: Nested Distributed Gradient Methods with Adaptive Quantized Communication