L-SVRG and L-Katyusha with Arbitrary Sampling
Publication: 6319946
arXiv: 1906.01481
MaRDI QID: Q6319946
FDO: Q6319946
Authors: Xun Qian, Zheng Qu, Peter Richtárik
Publication date: 4 June 2019
Abstract: We develop and analyze a new family of nonaccelerated and accelerated loopless variance-reduced methods for finite-sum optimization problems. Our convergence analysis relies on a novel expected smoothness condition that upper bounds the variance of the stochastic gradient estimator by a constant times a distance-like function. This allows us to handle arbitrary sampling schemes, as well as the nonconvex case, with ease. We perform an in-depth estimation of these expected smoothness parameters and propose new importance sampling schemes that allow linear speedup when the expected minibatch size is in a certain range. Furthermore, a connection between these expected smoothness parameters and expected separable overapproximation (ESO) is established, which allows us to exploit data sparsity as well. Our results recover as special cases the recently proposed loopless SVRG and loopless Katyusha.
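For orientation, below is a minimal sketch of the basic loopless SVRG (L-SVRG) update that the paper builds on: the inner loop of classical SVRG is replaced by a coin flip that refreshes the full-gradient snapshot with probability p. This is only an illustration with uniform sampling; the paper's contribution is the analysis under arbitrary sampling and acceleration, and the helper names (grad_i, full_grad) are hypothetical.

```python
import numpy as np

def l_svrg(grad_i, full_grad, x0, n, eta=0.1, p=None, iters=1000, rng=None):
    """Minimal L-SVRG sketch (loopless SVRG).

    grad_i(x, i)  -- gradient of the i-th summand f_i at x (assumed helper)
    full_grad(x)  -- full gradient (1/n) * sum_i grad_i(x, i) (assumed helper)
    """
    rng = rng or np.random.default_rng(0)
    p = p if p is not None else 1.0 / n   # common choice of snapshot probability
    x = x0.copy()
    w = x0.copy()                # reference point ("snapshot")
    mu = full_grad(w)            # full gradient stored at the snapshot
    for _ in range(iters):
        i = rng.integers(n)      # uniform sampling here; the paper covers arbitrary samplings
        g = grad_i(x, i) - grad_i(w, i) + mu   # variance-reduced gradient estimator
        x = x - eta * g
        if rng.random() < p:     # probabilistic snapshot update replaces SVRG's inner loop
            w = x.copy()
            mu = full_grad(w)
    return x
```

Under this sketch's assumptions, the snapshot is refreshed on average once every 1/p iterations, which recovers the epoch length of classical SVRG in expectation without the nested loop structure.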