The sample complexity of learning linear predictors with the squared loss
From MaRDI portal
Publication: 2788420
zbMATH Open: 1351.68233 · arXiv: 1406.5143 · MaRDI QID: Q2788420 · FDO: Q2788420
Publication date: 19 February 2016
Published in: Journal of Machine Learning Research (JMLR)
Full work available at URL: https://arxiv.org/abs/1406.5143
MSC classification: Linear regression; mixed models (62J05) · Learning and adaptive systems in artificial intelligence (68T05)
Cited In (8)
- Suboptimality of constrained least squares and improvements via non-linear predictors
- Finite sample performance of linear least squares estimation
- Distribution-free robust linear regression
- Title not available
- Characterizing the sample complexity of private learners
- Worst-case bounds for the logarithmic loss of predictors
- Exact minimax risk for linear least squares, and the lower tail of sample covariance matrices
- Sample Complexity of Classifiers Taking Values in ℝ^Q, Application to Multi-Class SVMs