A generalized online mirror descent with applications to classification and regression
DOI: 10.1007/s10994-014-5474-8 · zbMATH Open: 1359.62215 · DBLP: journals/ml/OrabonaCC15 · arXiv: 1304.2994 · OpenAlex: W2129427957 · Wikidata: Q59538550 · Scholia: Q59538550 · MaRDI QID: Q493737 · FDO: Q493737
Authors: Francesco Orabona, Koby Crammer, Nicolò Cesa-Bianchi
Publication date: 4 September 2015
Published in: Machine Learning
Full work available at URL: https://arxiv.org/abs/1304.2994
MSC classification
- 62H30 Classification and discrimination; cluster analysis (statistical aspects)
- 68T05 Learning and adaptive systems in artificial intelligence
- 62H20 Measures of association (correlation, canonical correlation, etc.)
- 68W27 Online algorithms; streaming algorithms
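For context on the record's topic: online mirror descent performs a gradient step in a dual space defined by a mirror map, then maps back to the feasible set. The sketch below is a minimal illustration only, assuming the classical negative-entropy mirror map on the probability simplex (the exponentiated-gradient special case); it is not the generalized, time-varying-regularizer algorithm of the paper itself, and the function name `omd_entropy_step` is our own.

```python
import numpy as np

def omd_entropy_step(w, grad, eta):
    """One online mirror descent step with the negative-entropy mirror map
    on the probability simplex (the exponentiated-gradient update):
    multiply each coordinate by exp(-eta * grad_i), then renormalize."""
    w_new = w * np.exp(-eta * grad)
    return w_new / w_new.sum()

# Toy run: two rounds of linear losses over the 3-simplex.
w = np.ones(3) / 3  # uniform start
for g in [np.array([1.0, 0.0, 0.5]), np.array([0.2, 1.0, 0.1])]:
    w = omd_entropy_step(w, g, eta=0.5)
```

After the two rounds, the coordinate with the smallest cumulative loss (the third) carries the largest weight, which is the qualitative behavior mirror descent is meant to exhibit.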
Cites Work
- Adaptive subgradient methods for online learning and stochastic optimization
- Prediction, Learning, and Games
- Convex analysis and monotone operator theory in Hilbert spaces
- Mirror descent and nonlinear projected subgradient methods for convex optimization
- Title not available
- Logarithmic regret algorithms for online convex optimization
- Dual averaging methods for regularized stochastic learning and online optimization
- Online learning and online convex optimization
- Interior-Point Methods for Full-Information and Bandit Online Learning
- Prediction by Categorical Features: Generalization Properties and Application to Feature Ranking
- Efficient online and batch learning using forward backward splitting
- Title not available
- Adaptive regularization of weight vectors
- A Second-Order Perceptron Algorithm
- Regularization techniques for learning with matrices
- Confidence-weighted linear classification for text categorization
- Exponentiated gradient versus gradient descent for linear predictors
- The robustness of the \(p\)-norm algorithms
- Adaptive and self-confident on-line learning algorithms
- Competitive On-line Statistics
- Relative loss bounds for on-line density estimation with the exponential family of distributions
- The \(p\)-norm generalization of the LMS algorithm for adaptive filtering
- Relative loss bounds for multidimensional regression problems
- Title not available
- A primal-dual perspective of online learning algorithms
- Some properties of hyperstructure and union normal fuzzy subgroups
Cited In (16)
- Scale-free online learning
- Title not available
- Convergence of online mirror descent
- Analysis of Online Composite Mirror Descent Algorithm
- Data-Driven Mirror Descent with Input-Convex Neural Networks
- Second-order non-stationary online learning for regression
- Analogues of switching subgradient schemes for relatively Lipschitz-continuous convex programming problems
- Title not available
- A modular analysis of adaptive (non-)convex optimization: optimism, composite objectives, variance reduction, and variational bounds
- Scale-invariant unconstrained online learning
- Optimistic optimisation of composite objective with exponentiated update
- A primal-dual perspective of online learning algorithms
- Scale-free algorithms for online linear optimization
- A continuous-time approach to online optimization
- Unifying mirror descent and dual averaging
- A survey of algorithms and analysis for adaptive online learning
This page was built for publication: A generalized online mirror descent with applications to classification and regression