Convergence of online mirror descent


DOI: 10.1016/J.ACHA.2018.05.005
zbMATH Open: 1494.68219
arXiv: 1802.06357
OpenAlex: W2803423166
Wikidata: Q129764098
Scholia: Q129764098
MaRDI QID: Q2278461
FDO: Q2278461


Authors: Yunwen Lei, Ding-Xuan Zhou


Publication date: 5 December 2019

Published in: Applied and Computational Harmonic Analysis

Abstract: In this paper we consider online mirror descent (OMD) algorithms, a class of scalable online learning algorithms that exploit the geometric structure of data through mirror maps. Necessary and sufficient conditions are presented in terms of the step size sequence $\{\eta_t\}_t$ for the convergence of an OMD algorithm with respect to the expected Bregman distance induced by the mirror map. The condition is $\lim_{t\to\infty}\eta_t=0$, $\sum_{t=1}^{\infty}\eta_t=\infty$ in the case of positive variances. It reduces to $\sum_{t=1}^{\infty}\eta_t=\infty$ in the case of zero variances, for which linear convergence may be achieved by taking a constant step size sequence. A sufficient condition for almost sure convergence is also given. We establish tight error bounds under mild conditions on the mirror map, the loss function, and the regularizer. Our results are achieved through a novel analysis of the one-step progress of the OMD algorithm, using the smoothness and strong convexity of the mirror map and the loss function.
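
To make the setting concrete, here is a minimal, hypothetical Python sketch of one standard OMD instance: the negative-entropy mirror map on the probability simplex (exponentiated gradient), run with the step size sequence $\eta_t = 1/\sqrt{t}$, which satisfies the stated conditions $\lim_{t\to\infty}\eta_t=0$ and $\sum_{t=1}^{\infty}\eta_t=\infty$. The loss, dimension, and target below are illustrative stand-ins, not taken from the paper.

```python
# Hypothetical sketch of online mirror descent (OMD) with the negative-entropy
# mirror map on the probability simplex, i.e. the exponentiated gradient
# update. All problem data here are illustrative assumptions.
import numpy as np

def omd_exponentiated_gradient(grad_fn, d, T):
    """Run T rounds of OMD with the negative-entropy mirror map on the d-simplex."""
    w = np.full(d, 1.0 / d)          # uniform starting point on the simplex
    for t in range(1, T + 1):
        eta_t = 1.0 / np.sqrt(t)     # eta_t -> 0 and sum_t eta_t = infinity
        g = grad_fn(w, t)            # (sub)gradient of the round-t loss at w
        w = w * np.exp(-eta_t * g)   # mirror step in the dual (entropy) geometry
        w /= w.sum()                 # normalization = Bregman projection onto simplex
    return w

# Illustrative usage: track a fixed target distribution under the squared loss
# ||w - target||^2, whose gradient at w is 2 * (w - target).
rng = np.random.default_rng(0)
target = rng.dirichlet(np.ones(5))
w_T = omd_exponentiated_gradient(lambda w, t: 2.0 * (w - target), d=5, T=5000)
print(np.max(np.abs(w_T - target)))  # small: the iterates approach the target
```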


Full work available at URL: https://arxiv.org/abs/1802.06357




