Fast Convergence Rates for Distributed Non-Bayesian Learning


DOI: 10.1109/TAC.2017.2690401
zbMATH Open: 1458.62116
arXiv: 1508.05161
OpenAlex: W2963118811
MaRDI QID: Q4566974


Authors: Angelia Nedić, Alex Olshevsky, César A. Uribe


Publication date: 27 June 2018

Published in: IEEE Transactions on Automatic Control

Abstract: We consider the problem of distributed learning, where agents in a network collectively aim to agree on a hypothesis that best explains a set of distributed observations of conditionally independent random processes. We propose a distributed algorithm and establish consistency, as well as a non-asymptotic, explicit, and geometric convergence rate for the concentration of the beliefs around the set of optimal hypotheses. Additionally, if the agents interact over static networks, we provide an improved learning protocol with better scalability with respect to the number of nodes in the network.
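The belief-concentration behavior described in the abstract can be illustrated with a minimal sketch of the log-linear (geometric-averaging) update rule common in distributed non-Bayesian learning. The mixing matrix, signal models, and all parameters below are illustrative assumptions for a toy network, not the paper's exact protocol or rate analysis.

```python
import numpy as np

# Toy distributed non-Bayesian learning: each agent mixes neighbors'
# beliefs via a weighted geometric average, then reweights by the
# likelihood of its own private signal (a log-linear update).
rng = np.random.default_rng(0)

n_agents, n_hyp, T = 3, 2, 300

# Doubly stochastic mixing matrix for a fully connected 3-agent
# network (assumed weights).
A = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

# Each agent observes Bernoulli signals; rows are agents, columns are
# hypotheses, entries give P(signal = 1 | hypothesis). Hypothesis 0 is true.
p = np.array([[0.3, 0.7],
              [0.4, 0.6],
              [0.2, 0.8]])
true_hyp = 0

beliefs = np.full((n_agents, n_hyp), 1.0 / n_hyp)  # uniform priors

for _ in range(T):
    # Private signals drawn under the true hypothesis.
    s = rng.random(n_agents) < p[:, true_hyp]
    # Likelihood of each agent's signal under every hypothesis.
    lik = np.where(s[:, None], p, 1.0 - p)
    # Geometric average of neighbors' beliefs, then local Bayesian
    # reweighting; normalize in log space for numerical stability.
    log_b = A @ np.log(np.clip(beliefs, 1e-300, None)) + np.log(lik)
    b = np.exp(log_b - log_b.max(axis=1, keepdims=True))
    beliefs = b / b.sum(axis=1, keepdims=True)

print(beliefs[:, true_hyp])  # each agent's belief on the true hypothesis
```

After a few hundred observations, every agent's belief concentrates on the true hypothesis, mirroring the geometric concentration the paper quantifies; consensus arises even though each agent alone might not distinguish the hypotheses as quickly.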


Full work available at URL: https://arxiv.org/abs/1508.05161







Cited In (17)





