Non-asymptotic analysis of stochastic approximation algorithms for streaming data
DOI: 10.1051/PS/2023006 · arXiv: 2109.07117 · OpenAlex: W3199142985 · MaRDI QID: Q6133920 · FDO: Q6133920
Authors: Antoine Godichon-Baggioni, Olivier Wintenberger
Publication date: 21 August 2023
Published in: ESAIM: Probability and Statistics
Abstract: We introduce a streaming framework for analyzing stochastic approximation/optimization problems. This streaming framework is analogous to solving optimization problems using time-varying mini-batches that arrive sequentially. We provide non-asymptotic convergence rates for various gradient-based algorithms; this includes the famous Stochastic Gradient (SG) descent (a.k.a. the Robbins-Monro algorithm), mini-batch SG, and time-varying mini-batch SG algorithms, as well as their iterated averages (a.k.a. Polyak-Ruppert averaging). We show (i) how to accelerate convergence by choosing the learning rate according to the time-varying mini-batches, (ii) that Polyak-Ruppert averaging achieves optimal convergence in terms of attaining the Cramér-Rao lower bound, and (iii) how time-varying mini-batches together with Polyak-Ruppert averaging can provide variance reduction and accelerate convergence simultaneously, which is advantageous for many learning problems, such as online, sequential, and large-scale learning. We further demonstrate these favorable effects for various time-varying mini-batches.
Full work available at URL: https://arxiv.org/abs/2109.07117
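The ingredients named in the abstract can be illustrated concretely. The following minimal Python/numpy sketch shows SG descent with time-varying mini-batches and Polyak-Ruppert averaging on a synthetic streaming least-squares problem; it is an illustration, not the paper's exact procedure. The batch schedule n_t = t^rho, the step-size rule gamma_t proportional to N_t^(-alpha) (with N_t the number of samples seen so far), the constants rho and alpha, and the helper next_minibatch are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic streaming least-squares problem: y = X @ theta_star + noise.
d = 5
theta_star = rng.normal(size=d)

def next_minibatch(n):
    # Hypothetical stream: draw the next mini-batch of size n.
    X = rng.normal(size=(n, d))
    y = X @ theta_star + 0.5 * rng.normal(size=n)
    return X, y

# Illustrative schedules (not the paper's tuned constants):
# mini-batch sizes n_t ~ t^rho, learning rates gamma_t ~ N_t^(-alpha),
# where N_t counts the samples consumed so far.
rho, alpha = 0.5, 0.66
T = 2000

theta = np.zeros(d)      # SG iterate
theta_bar = np.zeros(d)  # Polyak-Ruppert (uniform) average of the iterates
N = 0                    # samples consumed so far

for t in range(1, T + 1):
    n_t = max(1, int(t ** rho))           # time-varying mini-batch size
    X, y = next_minibatch(n_t)
    N += n_t
    grad = X.T @ (X @ theta - y) / n_t    # gradient of (1/(2n)) * ||X theta - y||^2
    gamma_t = N ** (-alpha)               # decaying step size tied to samples seen
    theta -= gamma_t * grad
    theta_bar += (theta - theta_bar) / t  # running average of the iterates

print("last iterate error:", np.linalg.norm(theta - theta_star))
print("averaged error:   ", np.linalg.norm(theta_bar - theta_star))

On this toy stream the averaged iterate theta_bar typically tracks theta_star more tightly than the last iterate, reflecting the variance-reduction effect of Polyak-Ruppert averaging described in the abstract.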
Classification: Convex programming (90C25) · Online algorithms; streaming algorithms (68W27) · Sequential estimation (62L12) · Stochastic approximation (62L20)
Cited in (2)