Nonasymptotic estimates for stochastic gradient Langevin dynamics under local conditions in nonconvex optimization
DOI: 10.1007/s00245-022-09932-6
OpenAlex: W4316041201
MaRDI QID: Q2682367
Sotirios Sabanis, Ying Zhang, Theodoros Damoulas, Ömer Deniz Akyildiz
Publication date: 31 January 2023
Published in: Applied Mathematics and Optimization
Full work available at URL: https://arxiv.org/abs/1910.02008
Keywords: non-convex optimization; variational inference; stochastic gradient Langevin dynamics; local Lipschitz continuity; local dissipativity; non-asymptotic estimates
MSC classifications:
- Computational methods in Markov chains (60J22)
- Sampling theory, sample surveys (62D05)
- Monte Carlo methods (65C05)
- Nonconvex programming, global optimization (90C26)
- Applications of Markov chains and discrete-time Markov processes on general state spaces (social mobility, learning theory, industrial processes, etc.) (60J20)
- Numerical analysis or methods applied to Markov chains (65C40)
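The keywords above name stochastic gradient Langevin dynamics (SGLD). As background, a minimal sketch of the standard SGLD iteration, a stochastic gradient step plus scaled Gaussian noise, is given below; the function names, the toy quadratic target, and the inverse-temperature parameterisation `beta` are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def sgld_step(theta, stochastic_grad, step_size, beta, rng):
    """One SGLD iteration: gradient step plus sqrt(2*step_size/beta) Gaussian noise.

    Illustrative sketch only; the parameterisation is a common textbook
    convention, not taken from the paper under discussion.
    """
    noise = rng.standard_normal(theta.shape)
    return theta - step_size * stochastic_grad(theta) \
        + np.sqrt(2.0 * step_size / beta) * noise

# Toy example: sample from exp(-beta * ||theta||^2 / 2), i.e. minimise
# f(theta) = ||theta||^2 / 2, whose (full) gradient is theta itself.
rng = np.random.default_rng(0)
theta = np.ones(2)
for _ in range(1000):
    theta = sgld_step(theta, lambda t: t, step_size=0.01, beta=100.0, rng=rng)
```

For a large inverse temperature `beta`, the invariant Gibbs measure concentrates near the minimiser, so the iterates drift from the starting point toward the origin and fluctuate around it.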
Related Items (5)
Cites Work
- Reflection couplings and contraction rates for diffusions
- Optimal portfolio selection and dynamic benchmark tracking
- Fixed-form variational posterior approximation through stochastic linear regression
- Laplace's method revisited: Weak convergence of probability measures
- On stochastic gradient Langevin dynamics with dependent data streams in the logconcave case
- The tamed unadjusted Langevin algorithm
- User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient
- High-dimensional Bayesian inference via the unadjusted Langevin algorithm
- Higher order Langevin Monte Carlo algorithm
- Nonasymptotic convergence analysis for the unadjusted Langevin algorithm
- Ergodicity for SDEs and approximations: locally Lipschitz vector fields and degenerate noise.
- Graphical Models, Exponential Families, and Variational Inference
- A useful theorem for nonlinear devices having Gaussian inputs
- Quantitative Harris-type theorems for diffusions and McKean–Vlasov processes
- On fixed gain recursive estimators with discontinuity in the parameters
- On Stochastic Gradient Langevin Dynamics with Dependent Data Streams: The Fully Nonconvex Case
- Theoretical Guarantees for Approximate Sampling from Smooth and Log-Concave Densities