On Complexity Issues of Online Learning Algorithms
DOI: 10.1109/TIT.2010.2079010
zbMATH Open: 1368.68288
OpenAlex: W2118285149
MaRDI QID: Q5281195
Authors: Yuan Yao
Publication date: 27 July 2017
Published in: IEEE Transactions on Information Theory
Full work available at URL: https://doi.org/10.1109/tit.2010.2079010
Mathematics Subject Classification:
- Learning and adaptive systems in artificial intelligence (68T05)
- Online algorithms; streaming algorithms (68W27)
- Stochastic approximation (62L20)
Cited In (19)
- Logistic classification with varying Gaussians
- Online learning for quantile regression and support vector regression
- Convergence of online mirror descent
- Fast and strong convergence of online learning algorithms
- Derivative reproducing properties for kernel methods in learning theory
- Approximation analysis of learning algorithms for support vector regression and quantile regression
- Concentration estimates for learning with \(\ell ^{1}\)-regularizer and data dependent hypothesis spaces
- Convergence of unregularized online learning algorithms
- Online Pairwise Learning Algorithms
- Can entropy characterize performance of online algorithms?
- Parameter learning algorithm for the online data acknowledgment problem
- Conditional quantiles with varying Gaussians
- Optimality of robust online learning
- Rates of convergence of randomized Kaczmarz algorithms in Hilbert spaces
- ERM scheme for quantile regression
- On-line learning and the metrical task system problem
- Learning with varying insensitive loss
- On-line learning in parity machines
- On grouping effect of elastic net