Adaptive and optimal online linear regression on \(\ell^1\)-balls
DOI: 10.1016/j.tcs.2013.09.024
zbMath: 1352.62108
OpenAlex: W2963812988
MaRDI QID: Q391734
Jia Yuan Yu, Sébastien Gerchinovitz
Publication date: 13 January 2014
Published in: Theoretical Computer Science
Full work available at URL: https://doi.org/10.1016/j.tcs.2013.09.024
MSC classification:
- Linear regression; mixed models (62J05)
- Learning and adaptive systems in artificial intelligence (68T05)
- Online algorithms; streaming algorithms (68W27)
Cites Work
- Exponentiated gradient versus gradient descent for linear predictors
- The robustness of the \(p\)-norm algorithms
- Concentration inequalities and model selection. École d'Été de Probabilités de Saint-Flour XXXIII -- 2003.
- Aggregating regression procedures to improve performance
- Adaptive and self-confident on-line learning algorithms
- Analysis of two gradient-based algorithms for on-line regression
- Improved second-order bounds for prediction with expert advice
- Trading Accuracy for Sparsity in Optimization Problems with Sparsity Constraints
- Minimizing Regret With Label Efficient Prediction
- Sequential Procedures for Aggregating Arbitrary Estimators of a Conditional Mean
- Competitive On-line Statistics
- Minimax Rates of Estimation for High-Dimensional Linear Regression Over \(\ell_q\)-Balls
- Learning Theory and Kernel Machines
- Prediction, Learning, and Games
- Gaussian model selection
- Relative loss bounds for on-line density estimation with the exponential family of distributions