Exponentiated gradient versus gradient descent for linear predictors
Publication: 675044
DOI: 10.1006/inco.1996.2612
zbMath: 0872.68158
Wikidata: Q100380108
Scholia: Q100380108
MaRDI QID: Q675044
Jyrki Kivinen, Manfred K. Warmuth
Publication date: 19 October 1997
Published in: Information and Computation
Full work available at URL: https://semanticscholar.org/paper/4e77fb934237e164ec090617a66de381ef0661a0
MSC classification: 68T05 (Learning and adaptive systems in artificial intelligence)
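The paper contrasts the additive gradient-descent (GD) update with the multiplicative exponentiated-gradient (EG) update for on-line linear regression. A minimal sketch of the two update rules under squared loss, assuming a fixed learning rate and EG weights kept on the probability simplex; the function names and concrete values are illustrative, not taken from the paper's notation:

```python
import math

def gd_update(w, x, y, lr=0.1):
    # Additive GD step on the squared loss (y_hat - y)^2 / 2:
    # each weight moves opposite its gradient component (y_hat - y) * x_i.
    y_hat = sum(wi * xi for wi, xi in zip(w, x))
    return [wi - lr * (y_hat - y) * xi for wi, xi in zip(w, x)]

def eg_update(w, x, y, lr=0.1):
    # Multiplicative EG step: scale each weight by exp(-lr * gradient_i),
    # then renormalize so the weights remain a probability distribution.
    y_hat = sum(wi * xi for wi, xi in zip(w, x))
    scaled = [wi * math.exp(-lr * (y_hat - y) * xi) for wi, xi in zip(w, x)]
    z = sum(scaled)
    return [wi / z for wi in scaled]
```

With a sparse target (only the first input component is informative), EG shifts weight toward that component multiplicatively while GD adjusts it additively; the paper's worst-case bounds quantify when each behavior wins.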
Related Items
The Concave-Convex Procedure
Competitive On-line Statistics
Multiplicative Updates for Nonnegative Quadratic Programming
Learning to Assign Degrees of Belief in Relational Domains
Online Ranking by Projecting
Neural learning by geometric integration of reduced `rigid-body' equations
Adaptive stepsizes for recursive estimation with applications in approximate dynamic programming
A primal-dual perspective of online learning algorithms
Competing with wild prediction rules
Learning to assign degrees of belief in relational domains
The Perceptron algorithm versus Winnow: linear versus logarithmic mistake bounds when few input variables are relevant
A game of prediction with expert advice
Worst-case analysis of the Perceptron and Exponentiated Update algorithms
Efficient learning with virtual threshold gates
Cutting-plane training of structural SVMs
Bayesian generalized probability calculus for density matrices
Extracting certainty from uncertainty: regret bounded by variation in costs
Analysis of two gradient-based algorithms for on-line regression
Recursive aggregation of estimators by the mirror descent algorithm with averaging
Limited Stochastic Meta-Descent for Kernel-Based Online Learning
PORTFOLIO SELECTION AND ONLINE LEARNING
RECURSIVE FORECAST COMBINATION FOR DEPENDENT HETEROGENEOUS DATA