Localized Gaussian width of \(M\)-convex hulls with applications to Lasso and convex aggregation
Publication: 2325349
DOI: 10.3150/18-BEJ1078 · zbMath: 1428.62328 · arXiv: 1705.10696 · MaRDI QID: Q2325349
Publication date: 25 September 2019
Published in: Bernoulli
Full work available at URL: https://arxiv.org/abs/1705.10696
- Nonparametric regression and quantile regression (62G08)
- Density estimation (62G07)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Linear regression; mixed models (62J05)
Related Items (2)
- ERM and RERM are optimal estimators for regression problems when malicious outliers corrupt the labels
- On the robustness of minimum norm interpolators and regularized empirical risk minimizers
Cites Work
- A new perspective on least squares under convex constraint
- Optimal exponential bounds for aggregation of density estimators
- On the prediction performance of the Lasso
- Statistics for high-dimensional data. Methods, theory and applications.
- Exponential screening and optimal rates of sparse estimation
- Oracle inequalities in empirical risk minimization and sparse recovery problems. École d'Été de Probabilités de Saint-Flour XXXVIII-2008.
- Oracle inequalities and optimal inference under group sparsity
- Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion
- Deviation optimal learning using greedy \(Q\)-aggregation
- Generalized mirror averaging and \(D\)-convex aggregation
- Linear and convex aggregation of density estimators
- Near-ideal model selection by \(\ell _{1}\) minimization
- Aggregation via empirical risk minimization
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Learning by mirror averaging
- Fast learning rates for plug-in classifiers
- On the prediction loss of the Lasso in the partially labeled setting
- Bounds on the prediction error of penalized least squares estimators with convex penalty
- Optimal bounds for aggregation of affine estimators
- Sharp oracle inequalities for least squares estimators in shape restricted regression
- Persistence in high-dimensional linear predictor selection and the virtue of overparametrization
- Mixing strategies for density estimation.
- Asymptotics for Lasso-type estimators.
- \(\ell _{1}\)-regularized linear regression: persistence and oracle inequalities
- Slope meets Lasso: improved oracle bounds and optimality
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- Simultaneous analysis of Lasso and Dantzig selector
- Empirical risk minimization is optimal for the convex aggregation problem
- Optimal learning with \(Q\)-aggregation
- Gaussian averages of interpolated bodies and applications to approximate reconstruction
- Empirical minimization
- Local Rademacher complexities
- Learning without Concentration
- The Generalized Lasso With Non-Linear Observations
- Reconstruction From Anisotropic Random Measurements
- Scaled sparse linear regression
- Information Theory and Mixing Least-Squares Regressions
- High-dimensional estimation with geometric constraints
- Learning Theory and Kernel Machines
- Aggregation by Exponential Weighting and Sharp Oracle Inequalities
- Introduction to nonparametric estimation
- Sparse estimation by exponential weighting