Constrained ensemble Langevin Monte Carlo

From MaRDI portal
Publication:2148951

DOI: 10.3934/FODS.2021034
zbMATH Open: 1489.65008
arXiv: 2102.04279
OpenAlex: W4205796992
MaRDI QID: Q2148951

Qin Li, Zhiyan Ding

Publication date: 24 June 2022

Published in: Foundations of Data Science

Abstract: The classical Langevin Monte Carlo (LMC) method draws samples from a target distribution by moving the samples along the gradient of the log-density of the target, and it enjoys a fast convergence rate. However, the numerical cost is sometimes high because each iteration requires the computation of a gradient. One approach to eliminating the gradient computation is to employ the concept of an "ensemble": a large number of particles are evolved together so that neighboring particles provide gradient information to each other. In this article, we discuss two algorithms that integrate the ensemble feature into LMC, and their associated properties. In particular, we find that if one directly surrogates the gradient with the ensemble approximation, the resulting algorithm, termed Ensemble Langevin Monte Carlo, is unstable due to a high-variance term. If the gradients are replaced by the ensemble approximations only in a constrained manner, to protect against the unstable points, the resulting algorithm, termed Constrained Ensemble Langevin Monte Carlo, resembles classical LMC up to an ensemble error but removes most of the gradient computation.
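For reference, the classical LMC iteration that the abstract builds on can be sketched in a few lines. This is not the authors' code: it is a minimal illustration of the baseline update x_{k+1} = x_k - h * grad f(x_k) + sqrt(2h) * xi_k for a target density pi(x) proportional to exp(-f(x)), using the exact gradient. The paper's ensemble variants replace `grad_f` below with an ensemble-based surrogate built from neighboring particles; that surrogate is not reproduced here.

```python
import numpy as np

def lmc_sample(grad_f, x0, step, n_iter, rng):
    """Classical overdamped Langevin Monte Carlo.

    Iterates x_{k+1} = x_k - step * grad_f(x_k) + sqrt(2*step) * xi_k,
    where xi_k is standard Gaussian noise, and returns all iterates.
    """
    x = np.asarray(x0, dtype=float).copy()
    samples = np.empty((n_iter,) + x.shape)
    for k in range(n_iter):
        xi = rng.standard_normal(x.shape)
        x = x - step * grad_f(x) + np.sqrt(2.0 * step) * xi
        samples[k] = x
    return samples

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Target: standard Gaussian, f(x) = x^2 / 2, so grad_f(x) = x.
    samples = lmc_sample(lambda x: x, x0=[5.0], step=0.05, n_iter=20000, rng=rng)
    tail = samples[5000:]  # discard burn-in
    # Sample mean should be near 0 and variance near 1 (up to
    # discretization bias of order `step`).
    print(float(tail.mean()), float(tail.var()))
```

The step size `step` controls the trade-off the abstract alludes to: each iteration costs one gradient evaluation, which is exactly the cost the ensemble variants aim to remove.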


Full work available at URL: https://arxiv.org/abs/2102.04279




Cited In (3)

This page was built for publication: Constrained ensemble Langevin Monte Carlo
