On the prediction loss of the Lasso in the partially labeled setting
DOI: 10.1214/18-EJS1457 · zbMath: 1408.62114 · arXiv: 1606.06179 · Wikidata: Q129087191 · Scholia: Q129087191 · MaRDI QID: Q1616320
Arnak S. Dalalyan, Quentin Paris, Edwin Grappin, Pierre C. Bellec
Publication date: 1 November 2018
Published in: Electronic Journal of Statistics
Full work available at URL: https://arxiv.org/abs/1606.06179
Keywords: sparsity, high-dimensional regression, oracle inequality, Lasso, transductive learning, semi-supervised learning, prediction risk
Mathematics Subject Classification:
- Nonparametric regression and quantile regression (62G08)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Classification and discrimination; cluster analysis (statistical aspects) (62H30)
- Learning and adaptive systems in artificial intelligence (68T05)
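As a rough illustration of the setting named by the keywords above (Lasso prediction with both labeled and unlabeled design points), here is a minimal sketch using scikit-learn's standard Lasso. It is not the transductive estimator analyzed in the paper; the problem sizes, noise level, and regularization parameter alpha are arbitrary assumptions made for the example.

```python
# Minimal sketch of Lasso prediction in a partially labeled regression setup:
# fit on the labeled rows, evaluate prediction loss on unlabeled design points.
# Illustrative only; not the estimator studied in the paper.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_labeled, n_unlabeled, p, s = 100, 400, 500, 5   # assumed toy dimensions

beta = np.zeros(p)
beta[:s] = 1.0                                     # s-sparse regression vector
X = rng.standard_normal((n_labeled + n_unlabeled, p))
y_labeled = X[:n_labeled] @ beta + 0.5 * rng.standard_normal(n_labeled)

lasso = Lasso(alpha=0.1).fit(X[:n_labeled], y_labeled)

# Prediction loss over the unlabeled part of the design.
pred = lasso.predict(X[n_labeled:])
loss = np.mean((pred - X[n_labeled:] @ beta) ** 2)
print(f"prediction loss on unlabeled design points: {loss:.4f}")
```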
Cites Work
- Solution of linear ill-posed problems using overcomplete dictionaries
- The lower tail of random quadratic forms with applications to ordinary least squares
- Transductive versions of the Lasso and the Dantzig selector
- On the prediction performance of the Lasso
- Statistics for high-dimensional data. Methods, theory and applications.
- Exponential screening and optimal rates of sparse estimation
- Oracle inequalities in empirical risk minimization and sparse recovery problems. École d'Été de Probabilités de Saint-Flour XXXVIII-2008.
- Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion
- User-friendly tail bounds for sums of random matrices
- Concentration inequalities and model selection. École d'Été de Probabilités de Saint-Flour XXXIII-2003.
- Regularization and the small-ball method. I: Sparse recovery
- On the conditions used to prove oracle results for the Lasso
- Slope meets Lasso: improved oracle bounds and optimality
- Pivotal estimation via square-root lasso in nonparametric regression
- Simultaneous analysis of Lasso and Dantzig selector
- Bounds of restricted isometry constants in extreme asymptotics: formulae for Gaussian matrices
- Reconstruction From Anisotropic Random Measurements
- Scaled sparse linear regression
- Accuracy Guarantees for $\ell_1$-Recovery
- Minimax Rates of Estimation for High-Dimensional Linear Regression Over $\ell_q$-Balls
- Sparse estimation by exponential weighting