Some sharp performance bounds for least squares regression with \(L_1\) regularization



DOI: 10.1214/08-AOS659
zbMath: 1173.62029
arXiv: 0908.2869
Wikidata: Q105584240
Scholia: Q105584240
MaRDI QID: Q834334

Tong Zhang

Publication date: 19 August 2009

Published in: The Annals of Statistics

Full work available at URL: https://arxiv.org/abs/0908.2869


MSC Classification

62G08: Nonparametric regression and quantile regression

62J05: Linear regression; mixed models
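
The paper studies the \(L_1\)-regularized least squares (lasso) estimator \(\hat\beta = \arg\min_\beta \tfrac{1}{2}\|y - X\beta\|_2^2 + \lambda\|\beta\|_1\). As a minimal illustration (not taken from the paper itself), the sketch below computes this estimator by proximal gradient descent (ISTA) on synthetic sparse data; the design matrix, noise level, and regularization parameter \(\lambda\) are illustrative assumptions, and the paper's own objective may use a different scaling of the loss.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1: shrinks each coordinate toward zero.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=500):
    """Minimize (1/2) * ||y - X b||_2^2 + lam * ||b||_1 via ISTA."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the smooth part
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y)           # gradient of the least squares term
        b = soft_threshold(b - grad / L, lam / L)
    return b

# Illustrative data: a sparse ground truth recovered from noisy observations.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
beta_true = np.zeros(20)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.standard_normal(100)
print(np.round(lasso_ista(X, y, lam=1.0), 2))
```

With this setup the estimate is sparse: the three active coordinates are recovered near their true values and most remaining coefficients are shrunk exactly to zero, which is the behavior whose risk the paper's performance bounds quantify.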


Related Items

Thresholded spectral algorithms for sparse approximations
Multistage Convex Relaxation Approach to Rank Regularized Minimization Problems Based on Equivalent Mathematical Program with a Generalized Complementarity Constraint
Consistent parameter estimation for Lasso and approximate message passing
Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models
Structured sparsity through convex optimization
A selective review of group selection in high-dimensional models
A general theory of concave regularization for high-dimensional sparse estimation problems
Performance bounds for parameter estimates of high-dimensional linear models with correlated errors
Asymptotic normality and optimalities in estimation of large Gaussian graphical models
Two-stage convex relaxation approach to least squares loss constrained low-rank plus sparsity optimization problems
Statistical consistency of coefficient-based conditional quantile regression
DC approximation approaches for sparse optimization
Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization
Sharp support recovery from noisy random measurements by \(\ell_1\)-minimization
Mirror averaging with sparsity priors
General nonexact oracle inequalities for classes with a subexponential envelope
A sharp nonasymptotic bound and phase diagram of \(L_{1/2}\) regularization
Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
Tuning parameter selection for the adaptive LASSO in the autoregressive model
Exponential screening and optimal rates of sparse estimation
Concentration estimates for learning with \(\ell ^{1}\)-regularizer and data dependent hypothesis spaces
Sparse recovery under matrix uncertainty
\(\ell_{1}\)-penalization for mixture regression models
Two-stage convex relaxation approach to low-rank and sparsity regularized least squares loss
Near-ideal model selection by \(\ell _{1}\) minimization
The benefit of group sparsity
\(l_{0}\)-norm based structural sparse least square regression for feature selection
I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error
An iterative algorithm for fitting nonconvex penalized generalized linear models with grouped predictors
\(\ell _{1}\)-regularized linear regression: persistence and oracle inequalities
On the conditions used to prove oracle results for the Lasso
Self-concordant analysis for logistic regression
The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso)
An efficient adaptive forward-backward selection method for sparse polynomial chaos expansion
Robust dequantized compressive sensing
Sparse trace norm regularization
Prediction and estimation consistency of sparse multi-class penalized optimal scoring
A hybrid Bregman alternating direction method of multipliers for the linearly constrained difference-of-convex problems
A general double-proximal gradient algorithm for d.c. programming
A simple homotopy proximal mapping algorithm for compressive sensing
Multi-stage convex relaxation for feature selection
Strong oracle optimality of folded concave penalized estimation
DC Approximation Approach for ℓ0-minimization in Compressed Sensing
Selection Consistency of Generalized Information Criterion for Sparse Logistic Model
THE COEFFICIENT REGULARIZED REGRESSION WITH RANDOM PROJECTION


