Dimension-free convergence rates for gradient Langevin dynamics in RKHS


arXiv: 2003.00306 · MaRDI QID: Q6335877 · FDO: Q6335877

Boris Muzellec, Taiji Suzuki, Mathurin Massias, Kanji Sato

Publication date: 29 February 2020

Abstract: Gradient Langevin dynamics (GLD) and stochastic GLD (SGLD) have attracted considerable attention lately as a way to provide convergence guarantees in a non-convex setting. However, the known rates grow exponentially with the dimension of the space. In this work, we provide a convergence analysis of GLD and SGLD when the optimization space is an infinite-dimensional Hilbert space. More precisely, we derive non-asymptotic, dimension-free convergence rates for GLD/SGLD when performing regularized non-convex optimization in a reproducing kernel Hilbert space. Among other things, the convergence analysis relies on the properties of a stochastic differential equation, its discrete-time Galerkin approximation, and the geometric ergodicity of the associated Markov chains.
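The abstract describes the GLD/SGLD iteration only verbally. Below is a minimal sketch of a gradient Langevin dynamics update for a regularized non-convex objective over kernel expansion coefficients, which play the role of a finite, Galerkin-style truncation of the RKHS element. The data, kernel, loss, and all hyperparameters are illustrative assumptions, not the paper's exact setting or rates.

```python
# Minimal GLD sketch (assumed setup, not the paper's exact scheme):
# objective F(alpha) = (1/n) sum_i (1/2) log(1 + (f(x_i) - y_i)^2) + (lam/2) ||f||_H^2,
# with f = sum_j alpha_j k(., x_j), a non-convex bounded loss chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data and a Gaussian (RBF) kernel -- assumed for illustration.
n = 50
X = rng.uniform(-1.0, 1.0, size=(n, 1))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(n)
K = np.exp(-0.5 * (X - X.T) ** 2 / 0.25)  # Gram matrix, bandwidth 0.5

lam = 1e-2       # RKHS regularization strength (assumed)
eta = 1e-3       # step size (assumed)
beta = 1e3       # inverse temperature (assumed)
n_steps = 5000

def grad_objective(alpha):
    """Gradient of the regularized empirical risk with respect to alpha."""
    residual = K @ alpha - y
    dloss = residual / (1.0 + residual ** 2)   # derivative of (1/2) log(1 + r^2)
    return K @ dloss / n + lam * (K @ alpha)   # RKHS norm: ||f||_H^2 = alpha^T K alpha

# GLD iteration: alpha_{k+1} = alpha_k - eta * grad F(alpha_k) + sqrt(2 eta / beta) * xi_k
alpha = np.zeros(n)
for _ in range(n_steps):
    noise = rng.standard_normal(n)
    alpha = alpha - eta * grad_objective(alpha) + np.sqrt(2.0 * eta / beta) * noise

risk = np.mean(np.log1p((K @ alpha - y) ** 2)) / 2 + lam / 2 * alpha @ K @ alpha
print("final regularized risk:", risk)
```

Replacing the full-data gradient with a mini-batch estimate in `grad_objective` would give the corresponding SGLD variant.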
