Exponentiated gradient versus gradient descent for linear predictors
Publication: Q675044
DOI: 10.1006/INCO.1996.2612
zbMATH Open: 0872.68158
OpenAlex: W2069317438
Wikidata: Q100380108 (Scholia: Q100380108)
MaRDI QID: Q675044
FDO: Q675044
Authors: Jyrki Kivinen, Manfred K. Warmuth
Publication date: 19 October 1997
Published in: Information and Computation
Full work available at URL: https://semanticscholar.org/paper/4e77fb934237e164ec090617a66de381ef0661a0
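As context for this record, the paper compares two online update rules for linear regression: standard gradient descent (GD) and the exponentiated gradient (EG) algorithm, which updates weights multiplicatively and renormalizes them on the probability simplex. The sketch below is illustrative only and not taken from the record; the toy data, step sizes, and function names are assumptions chosen for demonstration.

```python
import numpy as np

# Hedged sketch of the two update rules the paper compares, on online
# linear regression with squared loss. All concrete values (d, T, eta,
# the sparse target) are illustrative assumptions, not from the paper.

rng = np.random.default_rng(0)
d, T = 20, 500
u = np.zeros(d)
u[0] = 1.0  # sparse target: only the first coordinate is relevant

def run(update, eta):
    """Run T rounds of online squared-loss regression; return cumulative loss."""
    if update == "gd":
        w = np.zeros(d)                      # GD starts at the origin
    else:
        w = np.full(d, 1.0 / d)              # EG starts uniform on the simplex
    total_loss = 0.0
    for _ in range(T):
        x = rng.standard_normal(d)
        y = u @ x
        yhat = w @ x
        total_loss += (yhat - y) ** 2
        grad = 2.0 * (yhat - y) * x          # gradient of the squared loss
        if update == "gd":
            w = w - eta * grad               # additive (gradient descent) update
        else:
            w = w * np.exp(-eta * grad)      # multiplicative (exponentiated gradient) update
            w = w / w.sum()                  # renormalize to the probability simplex
    return total_loss

gd_loss = run("gd", eta=0.01)
eg_loss = run("eg", eta=0.05)
```

The qualitative contrast the paper analyzes is that EG's regret scales logarithmically in the dimension when the target is sparse, whereas GD's scales polynomially; on a toy run like the above, EG typically fares well when few input variables are relevant.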
Cited In (64)
- Regret analysis of an online majorized semi-proximal ADMM for online composite optimization
- Nonstationary online convex optimization with multiple predictions
- Statistical computational learning
- Learning to Assign Degrees of Belief in Relational Domains
- Distributed online bandit linear regressions with differential privacy
- A family of large margin linear classifiers and its application in dynamic environments
- Efficient algorithms for implementing incremental proximal-point methods
- Optimistic optimisation of composite objective with exponentiated update
- Scale-free online learning
- Adaptive and optimal online linear regression on \(\ell^1\)-balls
- Extracting certainty from uncertainty: regret bounded by variation in costs
- Improved algorithms for online load balancing
- Learning Theory
- An entropic Landweber method for linear ill-posed problems
- A kernel-based perceptron with dynamic memory
- Online variance minimization
- Multiplicative Updates for Nonnegative Quadratic Programming
- A game of prediction with expert advice
- Robust and sparse regression in generalized linear model by stochastic optimization
- Online decision making with high-dimensional covariates
- Relative utility bounds for empirically optimal portfolios
- Online learning based on online DCA and application to online classification
- The Perceptron algorithm versus Winnow: linear versus logarithmic mistake bounds when few input variables are relevant
- Adaptive regularization of weight vectors
- Analysis of two gradient-based algorithms for on-line regression
- Prior knowledge and preferential structures in gradient descent learning algorithms
- Constrained dual graph regularized orthogonal nonnegative matrix tri-factorization for co-clustering
- The entropic barrier: exponential families, log-concave geometry, and self-concordance
- Competing with wild prediction rules
- Learning to assign degrees of belief in relational domains
- Weighted last-step min-max algorithm with improved sub-logarithmic regret
- Testing for association in multiview network data
- Cutting-plane training of structural SVMs
- Online learning of Nash equilibria in congestion games
- Adaptive stepsizes for recursive estimation with applications in approximate dynamic programming
- Bayesian generalized probability calculus for density matrices
- Worst-case analysis of the Perceptron and Exponentiated Update algorithms
- Regrets of proximal method of multipliers for online non-convex optimization with long term constraints
- On the convergence of mirror descent beyond stochastic convex programming
- The Concave-Convex Procedure
- Online Ranking by Projecting
- Efficient learning with virtual threshold gates
- Recursive forecast combination for dependent heterogeneous data
- Randomized linear programming solves the Markov decision problem in nearly linear (sometimes sublinear) time
- Neural learning by geometric integration of reduced `rigid-body' equations
- Convergence rates of gradient methods for convex optimization in the space of measures
- An efficient approach to solve the large-scale semidefinite programming problems
- A modular analysis of adaptive (non-)convex optimization: optimism, composite objectives, variance reduction, and variational bounds
- A generalized online mirror descent with applications to classification and regression
- Learning rotations with little regret
- Recursive aggregation of estimators by the mirror descent algorithm with averaging
- Foraging theory for dimensionality reduction of clustered data
- Dynamical memory control based on projection technique for online regression
- PAC-Bayesian risk bounds for group-analysis sparse regression by exponential weighting
- Out-of-sample utility bounds for empirically optimal portfolios in a single-period investment problem
- PORTFOLIO SELECTION AND ONLINE LEARNING
- Achieving fairness with a simple ridge penalty
- A primal-dual perspective of online learning algorithms
- Competitive On-line Statistics
- A quasi-Bayesian perspective to online clustering
- Convergence of the exponentiated gradient method with Armijo line search
- A continuous-time approach to online optimization
- Limited Stochastic Meta-Descent for Kernel-Based Online Learning
- Exponentiated gradient algorithms for conditional random fields and max-margin Markov networks