Decentralized Bayesian learning with Metropolis-adjusted Hamiltonian Monte Carlo

From MaRDI portal
Publication: 6134345

DOI: 10.1007/S10994-023-06345-6
arXiv: 2107.07211
OpenAlex: W3180796377
MaRDI QID: Q6134345
FDO: Q6134345


Authors: Vyacheslav Kungurtsev, Adam D. Cobb, Tara Javidi, Brian A. Jalaian


Publication date: 22 August 2023

Published in: Machine Learning

Abstract: Federated learning performed by a decentralized network of agents is becoming increasingly important with the prevalence of embedded software on autonomous devices. Bayesian approaches to learning offer additional information on the uncertainty of a random quantity, and Langevin and Hamiltonian methods are effective at sampling from distributions with large parameter dimensions. Such methods have only recently appeared in the decentralized setting, and they either rely exclusively on stochastic gradient Langevin and Hamiltonian Monte Carlo approaches, which require a diminishing stepsize to asymptotically sample from the posterior and are known in practice to characterize uncertainty less faithfully than constant-stepsize methods with a Metropolis adjustment, or they assume strong convexity of the potential function. We present the first approach to incorporating constant-stepsize Metropolis-adjusted HMC into the decentralized sampling framework, show theoretical guarantees for consensus and for the probability distance to the posterior stationary distribution, and demonstrate its effectiveness numerically on standard real-world problems, including decentralized learning of neural networks, which is known to be highly non-convex.


Full work available at URL: https://arxiv.org/abs/2107.07211
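
For context, the abstract's building block is a constant-stepsize HMC proposal followed by a Metropolis accept/reject step. The sketch below is a minimal, generic single-chain illustration of that building block only, not the paper's decentralized algorithm; the names (`U`, `grad_U`, `eps`, `L`) and the Gaussian test target are illustrative assumptions.

```python
import numpy as np

def hmc_step(q, U, grad_U, eps=0.1, L=20, rng=np.random.default_rng()):
    """One constant-stepsize leapfrog HMC proposal with a Metropolis adjustment."""
    p = rng.standard_normal(q.shape)          # resample auxiliary momentum
    q_new, p_new = q.copy(), p.copy()

    # Leapfrog integration of the Hamiltonian dynamics
    p_new -= 0.5 * eps * grad_U(q_new)
    for _ in range(L - 1):
        q_new += eps * p_new
        p_new -= eps * grad_U(q_new)
    q_new += eps * p_new
    p_new -= 0.5 * eps * grad_U(q_new)

    # Metropolis adjustment: accept with probability min(1, exp(H_old - H_new))
    h_old = U(q) + 0.5 * p @ p
    h_new = U(q_new) + 0.5 * p_new @ p_new
    if rng.random() < np.exp(h_old - h_new):
        return q_new, True
    return q, False

# Toy usage: sample a 2-D standard Gaussian, U(q) = 0.5 * ||q||^2
if __name__ == "__main__":
    U = lambda q: 0.5 * q @ q
    grad_U = lambda q: q
    q, samples = np.zeros(2), []
    for _ in range(1000):
        q, _ = hmc_step(q, U, grad_U)
        samples.append(q)
    print(np.mean(samples, axis=0), np.std(samples, axis=0))
```

In the decentralized setting described in the abstract, each agent would run such Metropolis-adjusted HMC updates locally while also exchanging information with neighbors to reach consensus; that coordination step is specific to the paper and is not reproduced here.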












This page was built for publication: Decentralized Bayesian learning with Metropolis-adjusted Hamiltonian Monte Carlo
