Distributed event-triggered unadjusted Langevin algorithm for Bayesian learning
Publication:6136164
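For context, the unadjusted Langevin algorithm (ULA) named in the title draws approximate posterior samples by discretizing the Langevin diffusion without a Metropolis correction. Below is a minimal sketch of the plain, centralized ULA update; the paper's distributed, event-triggered variant (with inter-agent communication) is not reproduced here. The names `grad_U`, `step_size`, and the Gaussian example are illustrative assumptions, not taken from this page.

```python
import numpy as np

def ula_sample(grad_U, theta0, step_size=1e-2, n_steps=1000, rng=None):
    """Sketch of ULA: theta_{k+1} = theta_k - gamma * grad_U(theta_k) + sqrt(2*gamma) * xi_k,
    where U is the negative log-posterior and xi_k is standard Gaussian noise."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float)
    samples = []
    for _ in range(n_steps):
        noise = rng.standard_normal(theta.shape)
        theta = theta - step_size * grad_U(theta) + np.sqrt(2.0 * step_size) * noise
        samples.append(theta.copy())
    return np.array(samples)

# Illustrative usage: sample a standard Gaussian posterior, U(theta) = ||theta||^2 / 2,
# so grad_U(theta) = theta; the empirical mean and variance should approach 0 and 1.
if __name__ == "__main__":
    draws = ula_sample(lambda t: t, theta0=np.zeros(2), step_size=0.05, n_steps=5000)
    print(draws.mean(axis=0), draws.var(axis=0))
```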
Recommendations
- Distributed Bayesian learning with stochastic natural gradient expectation propagation and the posterior server
- Distributed Bayesian machine learning procedures
- Decentralized Bayesian learning with Metropolis-adjusted Hamiltonian Monte Carlo
- Distributed Variational Bayesian Algorithms Over Sensor Networks
- Hybrid deterministic-stochastic gradient Langevin dynamics for Bayesian learning
- scientific article; zbMATH DE number 7365721
- Fast Convergence Rates for Distributed Non-Bayesian Learning
- Stochastic Event-triggered Variational Bayesian Filtering
Cites work
- scientific article; zbMATH DE number 7626754
- An introduction to MCMC for machine learning
- Convergence of Langevin MCMC in KL-divergence
- Distributed Event-Triggered Control for Multi-Agent Systems
- Distributed convex optimization via continuous-time coordination algorithms with discrete-time communication
- Distributed linear parameter estimation: asymptotically efficient adaptive strategies
- Dynamic Triggering Mechanisms for Event-Triggered Control
- Exponential convergence of Langevin distributions and their discrete approximations
- Generalization of an inequality by Talagrand and links with the logarithmic Sobolev inequality
- Generalized inverse of the Laplacian matrix and some applications
- High-dimensional Bayesian inference via the unadjusted Langevin algorithm
- Improved bounds for discretization of Langevin diffusions: near-optimal rates without convexity
- Logarithmic Sobolev Inequalities
- Nonasymptotic bounds for sampling algorithms without log-concavity
- Nonasymptotic convergence analysis for the unadjusted Langevin algorithm
- Nonasymptotic estimates for stochastic gradient Langevin dynamics under local conditions in nonconvex optimization
- On Stochastic Gradient Langevin Dynamics with Dependent Data Streams: The Fully Nonconvex Case
- SPARQ-SGD: Event-Triggered and Compressed Communication in Decentralized Optimization
- Sampling can be faster than optimization
- Simulating Hamiltonian Dynamics
- The Variational Formulation of the Fokker–Planck Equation
- Theoretical Guarantees for Approximate Sampling from Smooth and Log-Concave Densities
- Variance-Reduced Decentralized Stochastic Optimization With Accelerated Convergence
Cited in (4)