Kernelization of matrix updates, when and how?
DOI: 10.1016/j.tcs.2014.09.031
zbMATH Open: 1360.68719
OpenAlex: W2174677872
MaRDI QID: Q465262
Authors: Manfred K. Warmuth, Wojciech Kotłowski, Shuisheng Zhou
Publication date: 31 October 2014
Published in: Theoretical Computer Science
Full work available at URL: https://doi.org/10.1016/j.tcs.2014.09.031
Keywords: kernelization; multiplicative updates; rotational invariance; exponentiated gradient algorithm; gradient descent algorithm
MSC: Learning and adaptive systems in artificial intelligence (68T05); Matrix exponential and similar functions of matrices (15A16)
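The keywords contrast the exponentiated gradient (EG) algorithm, a multiplicative update, with additive gradient descent. As background for the entry, a minimal sketch of the two vector-level update rules (not the paper's matrix version, and with an arbitrary illustrative gradient and learning rate):

```python
import numpy as np

def exponentiated_gradient_step(w, grad, eta=0.1):
    """One exponentiated gradient (EG) step: a multiplicative update
    that keeps the weight vector on the probability simplex."""
    v = w * np.exp(-eta * grad)   # multiplicative update
    return v / v.sum()            # renormalize so weights sum to 1

def gradient_descent_step(w, grad, eta=0.1):
    """One plain (additive) gradient descent step, for contrast."""
    return w - eta * grad

w = np.ones(4) / 4                    # uniform start on the simplex
g = np.array([1.0, 0.5, -0.5, -1.0])  # illustrative gradient
w_eg = exponentiated_gradient_step(w, g)
w_gd = gradient_descent_step(w, g)
```

Unlike the additive step, the EG step always yields strictly positive weights summing to 1, which is the property the multiplicative-update family is built around.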
Cites Work
- Some results on Tchebycheffian spline functions and stochastic processes
- The weighted majority algorithm
- Tracking the best linear predictor
- Title not available
- A theory of the learnable
- A new approach to collaborative filtering: operator estimation with spectral regularization
- Online Variance Minimization
- When is there a representer theorem? Vector versus matrix regularizers
- Competitive On-line Statistics
- Relative loss bounds for on-line density estimation with the exponential family of distributions
- Title not available
- Kernelization of matrix updates, when and how?
- Prototype Classification: Insights from Machine Learning
- Title not available
- Learning Theory
- Title not available
Cited In (2)