Convergence Rates of Decentralized Gradient Methods over Cluster Networks
From MaRDI portal
Publication:6380261
arXiv: 2110.06992 · MaRDI QID: Q6380261
Authors: Amit Dutta, Nila Masrourisaadat, Thinh T. Doan
Publication date: 13 October 2021
Abstract: We present an analysis of the performance of decentralized consensus-based gradient (DCG) methods for solving optimization problems over a cluster network of nodes. This type of network is composed of a number of densely connected clusters with sparse connections between them. Decentralized algorithms over cluster networks have been observed to exhibit two-time-scale dynamics, where information within any cluster is mixed much faster than information across clusters. Based on this observation, we present a novel analysis of the convergence of DCG methods over cluster networks. In particular, we show that these methods converge at a rate that scales only with the number of clusters, which is relatively small compared to the size of the network. Our result improves on existing analyses, in which the rates of these methods are shown to scale with the size of the network. The key technique in our analysis is a novel Lyapunov function that captures the impact of the two-time-scale dynamics on the convergence of these methods. We also illustrate our theoretical results with a number of numerical simulations using DCG methods over different cluster networks.
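To make the setting concrete, below is a minimal sketch of a DCG iteration over a two-cluster network. The topology (two complete subgraphs joined by a single bridge edge), the Metropolis mixing weights, the scalar quadratic objectives, and the step size are all illustrative assumptions, not the paper's experimental setup; each node mixes its iterate with its neighbors' and then takes a local gradient step.

```python
import numpy as np

def cluster_adjacency(cluster_sizes):
    """Densely connected clusters joined by one sparse bridge edge each
    (an illustrative cluster network, not the paper's topology)."""
    n = sum(cluster_sizes)
    A = np.zeros((n, n))
    starts, start = [], 0
    for s in cluster_sizes:
        A[start:start + s, start:start + s] = 1 - np.eye(s)  # complete subgraph
        starts.append(start)
        start += s
    for a, b_ in zip(starts, starts[1:]):   # one edge between consecutive clusters
        A[a, b_] = A[b_, a] = 1
    return A

def metropolis_weights(A):
    """Symmetric, doubly stochastic mixing matrix from the adjacency."""
    deg = A.sum(axis=1)
    W = np.zeros_like(A)
    for i in range(len(A)):
        for j in range(len(A)):
            if A[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
    np.fill_diagonal(W, 1.0 - W.sum(axis=1))
    return W

rng = np.random.default_rng(0)
A = cluster_adjacency([5, 5])
W = metropolis_weights(A)
n = len(A)

# Each node i holds the local objective f_i(x) = 0.5 * (x - b_i)^2,
# so the global minimizer of sum_i f_i is the mean of b.
b = rng.normal(size=n)
x = np.zeros(n)
alpha = 0.05                     # constant step size (illustrative)
for _ in range(2000):
    x = W @ x - alpha * (x - b)  # consensus mixing + local gradient step
```

With a constant step size the iterates reach consensus only up to an O(alpha) neighborhood, but their network average converges to the global minimizer; the fast intra-cluster mixing versus slow inter-cluster mixing visible in this topology is exactly the two-time-scale structure the abstract describes.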