The Randomized Midpoint Method for Log-Concave Sampling

Publication: 6325156
arXiv: 1909.05503
MaRDI QID: Q6325156


Authors: Ruoqi Shen, Yin Tat Lee


Publication date: 12 September 2019

Abstract: Sampling from log-concave distributions is a well-researched problem that has many applications in statistics and machine learning. We study distributions of the form $p^* \propto \exp(-f(x))$, where $f:\mathbb{R}^d \rightarrow \mathbb{R}$ has an $L$-Lipschitz gradient and is $m$-strongly convex. In our paper, we propose a Markov chain Monte Carlo (MCMC) algorithm based on the underdamped Langevin diffusion (ULD). It can achieve $\epsilon \cdot D$ error (in 2-Wasserstein distance) in $\tilde{O}\left(\kappa^{7/6}/\epsilon^{1/3}+\kappa/\epsilon^{2/3}\right)$ steps, where $D \overset{\mathrm{def}}{=} \sqrt{\frac{d}{m}}$ is the effective diameter of the problem and $\kappa \overset{\mathrm{def}}{=} \frac{L}{m}$ is the condition number. Our algorithm performs significantly faster than the previously best known algorithm for solving this problem, which requires $\tilde{O}\left(\kappa^{1.5}/\epsilon\right)$ steps. Moreover, our algorithm can be easily parallelized to require only $O(\kappa \log \frac{1}{\epsilon})$ parallel steps. To solve the sampling problem, we propose a new framework to discretize stochastic differential equations. We apply this framework to discretize and simulate ULD, which converges to the target distribution $p^*$. The framework can be used to solve not only the log-concave sampling problem, but any problem that involves simulating (stochastic) differential equations.
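To make the randomized-midpoint idea from the abstract concrete, below is a minimal sketch applied to the simpler overdamped Langevin diffusion $dx = -\nabla f(x)\,dt + \sqrt{2}\,dB_t$, not the authors' exact algorithm for the underdamped diffusion. The step size, the quadratic test potential, and all function names are illustrative assumptions; the only point being shown is how a gradient evaluated at a uniformly random intermediate time replaces the usual endpoint gradient in the discretization.

import numpy as np

def randomized_midpoint_step(x, grad_f, h, rng):
    """One randomized-midpoint step of size h for overdamped Langevin (illustrative sketch)."""
    d = x.shape[0]
    alpha = rng.uniform()                                # random fraction of the step in [0, 1)
    w1 = rng.normal(size=d) * np.sqrt(alpha * h)         # Brownian increment over [0, alpha*h]
    w2 = rng.normal(size=d) * np.sqrt((1 - alpha) * h)   # independent increment over [alpha*h, h]
    # Cheap Euler predictor for the state at the random intermediate time alpha*h.
    x_mid = x - alpha * h * grad_f(x) + np.sqrt(2.0) * w1
    # Full step uses the gradient at the random midpoint; with alpha uniform, this is
    # (up to the Euler approximation of x_mid) an unbiased estimate of the time-integrated drift.
    return x - h * grad_f(x_mid) + np.sqrt(2.0) * (w1 + w2)

if __name__ == "__main__":
    # Toy strongly log-concave target: f(x) = 0.5 * ||x||^2, i.e. a standard Gaussian.
    grad_f = lambda x: x
    rng = np.random.default_rng(0)
    x = np.zeros(5)
    for _ in range(2000):
        x = randomized_midpoint_step(x, grad_f, h=0.1, rng=rng)
    print(x)  # approximate sample from N(0, I_5)

Note the design choice that the Brownian increment used for the midpoint predictor (w1) is reused inside the full-step increment (w1 + w2), so both points are driven by the same underlying Brownian path over the step.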

