Optimization and Analysis of Distributed Averaging With Short Node Memory
From MaRDI portal
Publication:4570271
DOI: 10.1109/TSP.2010.2043127 · zbMATH Open: 1392.94371 · arXiv: 0903.3537 · MaRDI QID: Q4570271
Authors: Boris N. Oreshkin, Mark J. Coates, Michael G. Rabbat
Publication date: 9 July 2018
Published in: IEEE Transactions on Signal Processing
Abstract: In this paper, we demonstrate, both theoretically and by numerical examples, that adding a local prediction component to the update rule can significantly improve the convergence rate of distributed averaging algorithms. We focus on the case where the local predictor is a linear combination of the node's two previous values (i.e., two memory taps), and our update rule computes a combination of the predictor and the usual weighted linear combination of values received from neighboring nodes. We derive the optimal mixing parameter for combining the predictor with the neighbors' values, and carry out a theoretical analysis of the improvement in convergence rate that can be obtained using this acceleration methodology. For a chain topology on n nodes, this leads to a factor-of-n improvement over the one-step algorithm, and for a two-dimensional grid, our approach achieves a factor-of-n^{1/2} improvement, in terms of the number of iterations required to reach a prescribed level of accuracy.
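To make the idea concrete, here is a minimal numerical sketch of memory-based acceleration on a chain topology. It compares plain one-step averaging, x(t+1) = W x(t), against a two-register update of the form x(t+1) = β W x(t) + (1−β) x(t−1), which mixes the neighbor average with the previous iterate. The specific update form, the Metropolis weight construction, and the eigenvalue-based choice of β are illustrative assumptions, not the paper's exact derivation (the paper derives the optimal mixing parameter itself):

```python
import numpy as np

def chain_weights(n):
    """Metropolis-Hastings averaging weights for a chain (path) graph on n nodes."""
    deg = np.full(n, 2)
    deg[0] = deg[-1] = 1
    W = np.zeros((n, n))
    for i in range(n - 1):
        # Metropolis rule: w_ij = 1 / (1 + max(deg_i, deg_j)) for each edge (i, i+1)
        w = 1.0 / (1 + max(deg[i], deg[i + 1]))
        W[i, i + 1] = W[i + 1, i] = w
    np.fill_diagonal(W, 1.0 - W.sum(axis=1))  # rows sum to 1 (doubly stochastic)
    return W

def one_step(W, x0, iters):
    """Standard distributed averaging: x(t+1) = W x(t)."""
    x = x0.copy()
    for _ in range(iters):
        x = W @ x
    return x

def two_tap(W, x0, beta, iters):
    """Two-register accelerated update (illustrative form):
       x(t+1) = beta * W x(t) + (1 - beta) * x(t-1).
    Since W is doubly stochastic, the network average is preserved."""
    x_prev, x = x0.copy(), W @ x0
    for _ in range(iters - 1):
        x, x_prev = beta * (W @ x) + (1.0 - beta) * x_prev, x
    return x

n = 30
rng = np.random.default_rng(0)
x0 = rng.standard_normal(n)
W = chain_weights(n)
target = x0.mean()

# Heuristic mixing parameter from the second-largest eigenvalue of W
# (a standard second-order acceleration choice; an assumption here).
lam2 = np.sort(np.linalg.eigvalsh(W))[-2]
beta = 2.0 / (1.0 + np.sqrt(1.0 - lam2 ** 2))

err_plain = np.abs(one_step(W, x0, 200) - target).max()
err_acc = np.abs(two_tap(W, x0, beta, 200) - target).max()
print(err_acc < err_plain)  # the two-tap update converges much faster on the chain
```

On a chain the spectral gap of W is O(1/n²), so plain averaging is slow; the second-order recursion shrinks the error per iteration at a much faster rate, which is the kind of gain the abstract quantifies.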
Full work available at URL: https://arxiv.org/abs/0903.3537
MSC classification: Decision theory (91B06); Social choice (91B14); Signal theory (characterization, reconstruction, filtering, etc.) (94A12); Deterministic network models in operations research (90B10)
Cited In (12)
- An accelerated distributed gradient method with local memory
- Distributed algebraic connectivity estimation for undirected graphs with upper and lower bounds
- Fast consensus algorithm of multi-agent systems with double gains regulation
- Characterizing limits and opportunities in speeding up Markov chain mixing
- Fast distributed algebraic connectivity estimation in large scale networks
- A dual approach for optimal algorithms in distributed optimization over networks
- Optimal tradeoff between instantaneous and delayed neighbor information in consensus algorithms
- A new class of consensus protocols for agent networks with discrete time dynamics
- Robust Asynchronous Stochastic Gradient-Push: Asymptotically Optimal and Network-Independent Performance for Strongly Convex Functions
- Distributed adaptive control of linear multi-agent systems with event-triggered communications
- Fast convergent average consensus of multiagent systems based on community detection algorithm
- Accelerated consensus to accurate average in multi-agent networks via state prediction