Fast Convergence Rates for Distributed Non-Bayesian Learning
From MaRDI portal
Publication: 4566974
DOI: 10.1109/TAC.2017.2690401 · zbMATH Open: 1458.62116 · arXiv: 1508.05161 · OpenAlex: W2963118811 · MaRDI QID: Q4566974
Authors: Angelia Nedić, Alex Olshevsky, César A. Uribe
Publication date: 27 June 2018
Published in: IEEE Transactions on Automatic Control
Abstract: We consider the problem of distributed learning, where a network of agents collectively aims to agree on a hypothesis that best explains a set of distributed observations of conditionally independent random processes. We propose a distributed algorithm and establish consistency, as well as a non-asymptotic, explicit, and geometric convergence rate for the concentration of the beliefs around the set of optimal hypotheses. Additionally, if the agents interact over static networks, we provide an improved learning protocol with better scalability with respect to the number of nodes in the network.
Full work available at URL: https://arxiv.org/abs/1508.05161
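The distributed non-Bayesian learning scheme studied in this line of work combines a local Bayesian update with geometric (log-linear) averaging of neighbors' beliefs. The following is a minimal sketch of that update rule, not the paper's exact protocol: the network, the Bernoulli likelihood model, and all numerical values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: 3 agents on a small network choose between
# 2 hypotheses about a coin's bias. Hypothesis 0 (bias 0.3) is true.
biases = np.array([0.3, 0.7])  # candidate hypotheses (assumed for illustration)
true_theta = 0
n_agents, n_hyp = 3, 2

# Doubly stochastic mixing matrix for a fully connected 3-node graph
# with lazy self-weights (an assumption, not taken from the paper).
A = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

# Uniform initial beliefs over the hypotheses.
beliefs = np.full((n_agents, n_hyp), 1.0 / n_hyp)

def log_likelihood(obs, bias):
    """Bernoulli log-likelihood of a 0/1 observation under a given bias."""
    return obs * np.log(bias) + (1 - obs) * np.log(1 - bias)

for t in range(200):
    # Each agent draws one private observation from the true distribution.
    obs = rng.binomial(1, biases[true_theta], size=n_agents)
    # Geometric averaging of neighbors' beliefs = consensus in log space,
    # followed by a local Bayesian update with the fresh observation.
    log_b = A @ np.log(beliefs)
    for i in range(n_agents):
        log_b[i] += log_likelihood(obs[i], biases)
    # Normalize back to probability vectors (subtract max for stability).
    beliefs = np.exp(log_b - log_b.max(axis=1, keepdims=True))
    beliefs /= beliefs.sum(axis=1, keepdims=True)

print(beliefs[:, true_theta])  # each agent's belief in the true hypothesis
```

Under conditions like those in the paper (connected network, identifiable hypotheses), every agent's belief concentrates geometrically on the best hypothesis; in this toy run all three agents end up assigning nearly all mass to hypothesis 0.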
MSC classifications: Bayesian inference (62F15); Estimation in multivariate analysis (62H12); Distributed algorithms (68W15); Stochastic learning and adaptive control (93E35)
Cited In (17)
- Fast learning rates in statistical inference through aggregation
- Non-smooth setting of stochastic decentralized convex optimization problem over time-varying graphs
- Distributed stochastic gradient tracking methods
- Optimal gradient tracking for decentralized optimization
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Personalized optimization with user's feedback
- Distributed event-triggered unadjusted Langevin algorithm for Bayesian learning
- A stochastic averaging gradient algorithm with multi‐step communication for distributed optimization
- Differentially private distributed online learning over time‐varying digraphs via dual averaging
- A dual approach for optimal algorithms in distributed optimization over networks
- Graph Topology Invariant Gradient and Sampling Complexity for Decentralized and Stochastic Optimization
- Distributed consensus-based multi-agent convex optimization via gradient tracking technique
- Graph-theoretic approaches for analyzing the resilience of distributed control systems: a tutorial and survey
- Min-max optimization over slowly time-varying graphs
- Decentralized optimization over slowly time-varying graphs: algorithms and lower bounds
- Towards accelerated rates for distributed optimization over time-varying networks
- Distributed Bayesian filtering using logarithmic opinion pool for dynamic sensor networks