Accelerating Nonconvex Learning via Replica Exchange Langevin Diffusion

From MaRDI portal
Publication: 6344402

arXiv: 2007.01990
MaRDI QID: Q6344402


Authors: Yi Chen, Jing Lin Chen, Jing Dong, Jian Peng, Zhaoran Wang


Publication date: 3 July 2020

Abstract: Langevin diffusion is a powerful method for nonconvex optimization, which enables the escape from local minima by injecting noise into the gradient. In particular, the temperature parameter controlling the noise level gives rise to a tradeoff between "global exploration" and "local exploitation", which correspond to high and low temperatures, respectively. To attain the advantages of both regimes, we propose to use replica exchange, which swaps between two Langevin diffusions with different temperatures. We theoretically analyze the acceleration effect of replica exchange from two perspectives: (i) the convergence in χ²-divergence, and (ii) the large deviation principle. Such an acceleration effect allows us to approach the global minima faster. Furthermore, by discretizing the replica exchange Langevin diffusion, we obtain a discrete-time algorithm. For such an algorithm, we quantify its discretization error in theory and demonstrate its acceleration effect in practice.
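The abstract describes running two Langevin diffusions at different temperatures and occasionally swapping them. A minimal sketch of the discrete-time version on a toy double-well objective is below; this is an illustration of the general replica-exchange idea, not the paper's exact algorithm, and all function names, step sizes, and temperatures are assumptions chosen for the example.

```python
import numpy as np

def f(x):
    # Toy nonconvex objective: double well with global minima at x = ±1.
    return (x**2 - 1.0)**2

def grad_f(x):
    # Gradient of the double-well objective above.
    return 4.0 * x * (x**2 - 1.0)

def replica_exchange_langevin(steps=2000, eta=0.01, tau_low=0.05, tau_high=1.0,
                              swap_every=10, seed=0):
    """Sketch of replica exchange between two discretized Langevin chains.

    tau_low drives local exploitation, tau_high drives global exploration;
    swaps use a Metropolis-style acceptance rule. Hyperparameters are
    illustrative, not taken from the paper.
    """
    rng = np.random.default_rng(seed)
    x_low, x_high = 2.0, -2.0  # arbitrary starting points for the two replicas
    for t in range(steps):
        # Euler-Maruyama discretization of each Langevin diffusion:
        # x <- x - eta * grad f(x) + sqrt(2 * eta * tau) * N(0, 1)
        x_low += -eta * grad_f(x_low) + np.sqrt(2 * eta * tau_low) * rng.standard_normal()
        x_high += -eta * grad_f(x_high) + np.sqrt(2 * eta * tau_high) * rng.standard_normal()
        if t % swap_every == 0:
            # Accept a swap with probability
            # min(1, exp((1/tau_low - 1/tau_high) * (f(x_low) - f(x_high)))).
            log_accept = (1.0 / tau_low - 1.0 / tau_high) * (f(x_low) - f(x_high))
            if np.log(rng.uniform()) < min(0.0, log_accept):
                x_low, x_high = x_high, x_low
    return x_low, x_high

x_low, x_high = replica_exchange_langevin()
```

The swap rule lets the low-temperature replica inherit a better basin found by the high-temperature one, which is the acceleration mechanism the abstract analyzes.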

