A generalized online mirror descent with applications to classification and regression

From MaRDI portal
Publication:493737

DOI: 10.1007/S10994-014-5474-8
zbMATH Open: 1359.62215
DBLP: journals/ml/OrabonaCC15
arXiv: 1304.2994
OpenAlex: W2129427957
Wikidata: Q59538550 (Scholia: Q59538550)
MaRDI QID: Q493737
FDO: Q493737


Authors: Francesco Orabona, Koby Crammer, Nicolò Cesa-Bianchi


Publication date: 4 September 2015

Published in: Machine Learning

Abstract: Online learning algorithms are fast, memory-efficient, easy to implement, and applicable to many prediction problems, including classification, regression, and ranking. Several online algorithms have been proposed over the past few decades, some based on additive updates, like the Perceptron, and some on multiplicative updates, like Winnow. A unifying perspective on the design and the analysis of online algorithms is provided by online mirror descent, a general prediction strategy from which most first-order algorithms can be obtained as special cases. We generalize online mirror descent to time-varying regularizers with generic updates. Unlike standard mirror descent, our more general formulation also captures second-order algorithms, algorithms for composite losses, and algorithms for adaptive filtering. Moreover, we recover, and sometimes improve, known regret bounds as special cases of our analysis using specific regularizers. Finally, we show the power of our approach by deriving a new second-order algorithm with a regret bound invariant with respect to arbitrary rescalings of individual features.
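
The abstract describes the idea at a high level; the following is a minimal illustrative sketch of online mirror descent with a time-varying regularizer, here a diagonal AdaGrad-style quadratic regularizer combined with a hinge loss. The loss, step size eta, and function names are assumptions chosen for illustration, not the paper's exact algorithm or notation.

import numpy as np

def hinge_loss_grad(w, x, y):
    # Subgradient of the hinge loss max(0, 1 - y <w, x>) for a label y in {-1, +1}.
    return -y * x if y * np.dot(w, x) < 1.0 else np.zeros_like(x)

def omd_time_varying(stream, dim, eta=1.0, eps=1e-8):
    # Online mirror descent whose regularizer grows with the observed gradients:
    # psi_t(w) = 0.5 * sum_i sqrt(eps + sum_{s<=t} g_{s,i}^2) * w_i^2 (diagonal, AdaGrad-like).
    # For such a diagonal quadratic regularizer, the mirror step reduces to a
    # per-coordinate scaled gradient update.
    w = np.zeros(dim)
    grad_sq = np.zeros(dim)              # running sum of squared gradient coordinates
    for x, y in stream:
        g = hinge_loss_grad(w, x, y)     # subgradient of the current loss at w
        grad_sq += g * g
        w -= eta * g / np.sqrt(eps + grad_sq)
        yield w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stream = [(rng.normal(size=5), rng.choice([-1, 1])) for _ in range(100)]
    for w in omd_time_varying(stream, dim=5):
        pass
    print("final weights:", w)

Because the per-coordinate scaling adapts to the magnitude of each feature's gradients, this kind of update is a simple way to see how a time-varying regularizer can yield behavior that is less sensitive to feature scaling, which the paper develops into a regret bound invariant under arbitrary rescalings of individual features.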


Full work available at URL: https://arxiv.org/abs/1304.2994




Cited In (16)






This page was built for publication: A generalized online mirror descent with applications to classification and regression
