Some sharp performance bounds for least squares regression with L₁ regularization
Publication: 834334
DOI: 10.1214/08-AOS659
zbMATH Open: 1173.62029
arXiv: 0908.2869
OpenAlex: W3104148512
Wikidata: Q105584240 (Scholia: Q105584240)
MaRDI QID: Q834334
FDO: Q834334
Authors: Tong Zhang
Publication date: 19 August 2009
Published in: The Annals of Statistics
Abstract: We derive sharp performance bounds for least squares regression with \(L_1\) regularization from parameter estimation accuracy and feature selection quality perspectives. The main result proved for \(L_1\) regularization extends a similar result in [Ann. Statist. 35 (2007) 2313--2351] for the Dantzig selector. It gives an affirmative answer to an open question in [Ann. Statist. 35 (2007) 2358--2364]. Moreover, the result leads to an extended view of feature selection that allows less restrictive conditions than some recent work. Based on the theoretical insights, a novel two-stage \(L_1\)-regularization procedure with selective penalization is analyzed. It is shown that if the target parameter vector can be decomposed as the sum of a sparse parameter vector with large coefficients and another less sparse vector with relatively small coefficients, then the two-stage procedure can lead to improved performance.
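For orientation, the parameter-estimation bounds in this line of work take the following canonical form (a representative statement under standard assumptions, not the paper's exact theorem): for the \(L_1\)-regularized least squares estimator
\[
\hat{\beta} = \operatorname*{arg\,min}_{\beta \in \mathbb{R}^p} \; \frac{1}{n}\|X\beta - y\|_2^2 + \lambda \|\beta\|_1 ,
\]
if the target \(\bar{\beta}\) is \(k\)-sparse, the noise is sub-Gaussian with scale \(\sigma\), and the design \(X\) satisfies a restricted-eigenvalue-type condition, then the choice \(\lambda \asymp \sigma \sqrt{\log p / n}\) yields, with high probability,
\[
\|\hat{\beta} - \bar{\beta}\|_2 = O\!\left( \sigma \sqrt{\frac{k \log p}{n}} \right).
\]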
Full work available at URL: https://arxiv.org/abs/0908.2869
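To make the two-stage procedure concrete, here is a minimal Python sketch (an illustration under simplifying assumptions, not the paper's exact algorithm or tuning): stage one solves a plain Lasso; stage two re-solves the weighted problem with the penalty removed on coordinates whose first-stage estimates exceed a threshold, so large coefficients are left unpenalized while the rest remain regularized. The coordinate-descent solver, the hard-threshold rule, and all constants are assumptions made for the example.

```python
import numpy as np

def soft_threshold(a, t):
    """Soft-thresholding operator S(a, t) = sign(a) * max(|a| - t, 0)."""
    return np.sign(a) * np.maximum(np.abs(a) - t, 0.0)

def weighted_lasso(X, y, lam, n_iter=200):
    """Coordinate descent for (1/2n)||y - X b||^2 + sum_j lam[j] |b_j|,
    where lam is a vector of per-coordinate penalty levels."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n   # per-coordinate curvature
    r = y.astype(float).copy()          # residual y - X @ beta (beta = 0)
    for _ in range(n_iter):
        for j in range(p):
            if col_sq[j] == 0.0:
                continue
            # partial correlation, adding back coordinate j's own fit
            rho = X[:, j] @ r / n + col_sq[j] * beta[j]
            new = soft_threshold(rho, lam[j]) / col_sq[j]
            r += X[:, j] * (beta[j] - new)   # keep residual consistent
            beta[j] = new
    return beta

def two_stage_lasso(X, y, lam, threshold):
    """Stage 1: ordinary Lasso. Stage 2: selective penalization -- drop
    the penalty on coordinates whose stage-1 estimates exceed `threshold`."""
    p = X.shape[1]
    beta1 = weighted_lasso(X, y, lam * np.ones(p))
    lam2 = np.where(np.abs(beta1) > threshold, 0.0, lam)
    return weighted_lasso(X, y, lam2)

# Illustrative use on synthetic sparse data (all sizes/constants arbitrary).
rng = np.random.default_rng(0)
n, p, k = 200, 500, 5
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:k] = 3.0
y = X @ beta_true + 0.5 * rng.standard_normal(n)
lam = 0.5 * np.sqrt(np.log(p) / n)
beta_hat = two_stage_lasso(X, y, lam, threshold=0.5)
print("stage-2 support:", np.flatnonzero(np.abs(beta_hat) > 1e-8)[:10])
```

With zero penalty on the selected coordinates, stage two behaves like an unpenalized refit on the large-coefficient part while the remaining coordinates stay \(L_1\)-regularized, mirroring the abstract's decomposition into a sparse large-coefficient vector plus a less sparse small-coefficient vector.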
Recommendations
- A unified approach to model selection and sparse recovery using regularized least squares
- \(\ell _{1}\)-regularized linear regression: persistence and oracle inequalities
- A sharp nonasymptotic bound and phase diagram of \(L_{1/2}\) regularization
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- Least squares regression with \(l_1\)-regularizer in sum space
Cites Work
- Lasso-type recovery of sparse representations for high-dimensional data
- Simultaneous analysis of Lasso and Dantzig selector
- High-dimensional generalized linear models and the lasso
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- High-dimensional graphs and variable selection with the Lasso
- A generalized Dantzig selector with shrinkage tuning
- Sparsity oracle inequalities for the Lasso
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Relaxed Lasso
- Decoding by Linear Programming
- Stable recovery of sparse overcomplete representations in the presence of noise
- Stable signal recovery from incomplete and inaccurate measurements
- Aggregation for Gaussian regression
- The Dantzig selector and sparsity oracle inequalities
- Sparsity in penalized empirical risk minimization
Cited In (57)
- Sample average approximation with heavier tails II: localization in stochastic convex optimization and persistence results for the Lasso
- Variable Selection With Second-Generation P-Values
- Stab-GKnock: controlled variable selection for partially linear models using generalized knockoffs
- The benefit of group sparsity
- Two-stage convex relaxation approach to low-rank and sparsity regularized least squares loss
- On the conditions used to prove oracle results for the Lasso
- A survey of \(L_1\) regression
- A multi-stage convex relaxation approach to noisy structured low-rank matrix recovery
- Multi-stage convex relaxation for feature selection
- Exponential screening and optimal rates of sparse estimation
- Two-stage convex relaxation approach to least squares loss constrained low-rank plus sparsity optimization problems
- Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization
- The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso)
- Sharp support recovery from noisy random measurements by \(\ell_1\)-minimization
- Greedy variance estimation for the LASSO
- I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error
- Thresholded spectral algorithms for sparse approximations
- Statistical consistency of coefficient-based conditional quantile regression
- Multistage convex relaxation approach to rank regularized minimization problems based on equivalent mathematical program with a generalized complementarity constraint
- DC approximation approach for \(\ell_0\)-minimization in compressed sensing
- Prediction and estimation consistency of sparse multi-class penalized optimal scoring
- Near-ideal model selection by \(\ell _{1}\) minimization
- Concentration estimates for learning with \(\ell ^{1}\)-regularizer and data dependent hypothesis spaces
- Mirror averaging with sparsity priors
- A general double-proximal gradient algorithm for d.c. programming
- An iterative algorithm for fitting nonconvex penalized generalized linear models with grouped predictors
- \(\ell _{1}\)-regularized linear regression: persistence and oracle inequalities
- A hybrid Bregman alternating direction method of multipliers for the linearly constrained difference-of-convex problems
- DC approximation approaches for sparse optimization
- Which bridge estimator is the best for variable selection?
- Sparse estimation via lower-order penalty optimization methods in high-dimensional linear regression
- Performance of first- and second-order methods for \(\ell_1\)-regularized least squares problems
- \(l_{0}\)-norm based structural sparse least square regression for feature selection
- A sharp nonasymptotic bound and phase diagram of \(L_{1/2}\) regularization
- Hypothesis testing in large-scale functional linear regression
- Asymptotic normality and optimalities in estimation of large Gaussian graphical models
- Sparse trace norm regularization
- Self-concordant analysis for logistic regression
- Selection Consistency of Generalized Information Criterion for Sparse Logistic Model
- Strong oracle optimality of folded concave penalized estimation
- Sparse recovery under matrix uncertainty
- Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models
- General nonexact oracle inequalities for classes with a subexponential envelope
- \(\ell_{1}\)-penalization for mixture regression models
- Performance bounds for parameter estimates of high-dimensional linear models with correlated errors
- Structured sparsity through convex optimization
- Robust dequantized compressive sensing
- Consistent parameter estimation for Lasso and approximate message passing
- The coefficient regularized regression with random projection
- An efficient adaptive forward-backward selection method for sparse polynomial chaos expansion
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
- A selective review of group selection in high-dimensional models
- A simple homotopy proximal mapping algorithm for compressive sensing
- A general theory of concave regularization for high-dimensional sparse estimation problems
- Tuning parameter selection for the adaptive LASSO in the autoregressive model
- Least squares regression with \(l_1\)-regularizer in sum space
- High-Dimensional Gaussian Graphical Regression Models with Covariates