Stochastic gradient Hamiltonian Monte Carlo for non-convex learning
DOI: 10.1016/j.spa.2022.04.001
zbMath: 1495.65004
arXiv: 1903.10328
OpenAlex: W3003439716
Wikidata: Q114130746 (Scholia: Q114130746)
MaRDI QID: Q2137760
Publication date: 16 May 2022
Published in: Stochastic Processes and their Applications
Full work available at URL: https://arxiv.org/abs/1903.10328
Mathematics Subject Classification:
- Monte Carlo methods (65C05)
- Computational methods for stochastic equations (aspects of stochastic analysis) (60H35)
- Neural nets and related approaches to inference from stochastic processes (62M45)
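The publication concerns stochastic gradient Hamiltonian Monte Carlo (SGHMC) for non-convex learning. As a rough orientation only, below is a minimal sketch of a generic SGHMC update, i.e. a discretization of underdamped Langevin dynamics driven by stochastic gradients; the parameter names (step size eta, friction gamma, inverse temperature beta) and the double-well example are illustrative assumptions, not taken from the paper itself.

```python
# Hedged sketch of a generic SGHMC iteration, not the paper's exact scheme:
#   v <- v - eta * (gamma * v + grad U(theta)) + sqrt(2 * gamma * eta / beta) * xi
#   theta <- theta + eta * v
# where xi is standard Gaussian noise and grad U may be a minibatch estimate.
import numpy as np

def sghmc(grad_estimate, theta0, n_steps=10_000, eta=1e-3,
          gamma=1.0, beta=1.0, rng=None):
    """Run SGHMC from theta0 using a (possibly noisy) gradient oracle."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float).copy()
    v = np.zeros_like(theta)                    # momentum variable
    noise_scale = np.sqrt(2.0 * gamma * eta / beta)
    samples = []
    for _ in range(n_steps):
        g = grad_estimate(theta)                # stochastic gradient of U
        v = v - eta * (gamma * v + g) + noise_scale * rng.standard_normal(theta.shape)
        theta = theta + eta * v                 # position update with momentum
        samples.append(theta.copy())
    return np.array(samples)

# Illustrative usage: a non-convex double-well potential U(x) = (x^2 - 1)^2,
# whose exact gradient stands in for a noisy minibatch estimate.
if __name__ == "__main__":
    grad_U = lambda x: 4.0 * x * (x**2 - 1.0)
    draws = sghmc(grad_U, theta0=np.array([0.0]), beta=4.0)
    print("mean |x| over last 5000 draws:", np.abs(draws[-5000:]).mean())
```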
Related Items (1)
Cites Work
- Weighted Csiszár-Kullback-Pinsker inequalities and applications to transportation inequalities
- Asymptotics of the spectral gap with applications to the theory of simulated annealing
- Nonstationary Markov chains and convergence of the annealing algorithm
- Laplace's method revisited: Weak convergence of probability measures
- On stochastic gradient Langevin dynamics with dependent data streams in the logconcave case
- User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient
- High-dimensional Bayesian inference via the unadjusted Langevin algorithm
- Couplings and quantitative contraction rates for Langevin dynamics
- Nonasymptotic convergence analysis for the unadjusted Langevin algorithm
- The geometric foundations of Hamiltonian Monte Carlo
- Nonasymptotic bounds for sampling algorithms without log-concavity
- The Rate of Convergence of Nesterov's Accelerated Forward-Backward Method is Actually Faster Than $1/k^2$
- Wasserstein Continuity of Entropy and Outer Bounds for Interference Channels
- Speeding up MCMC by Delayed Acceptance and Data Subsampling
- Diffusion for Global Optimization in $\mathbb{R}^n$
- Recursive Stochastic Algorithms for Global Optimization in $\mathbb{R}^d$
- Metropolis-Type Annealing Algorithms for Global Optimization in $\mathbb{R}^d$
- On fixed gain recursive estimators with discontinuity in the parameters
- On Stochastic Gradient Langevin Dynamics with Dependent Data Streams: The Fully Nonconvex Case
- Theoretical Guarantees for Approximate Sampling from Smooth and Log-Concave Densities