Lasso-type recovery of sparse representations for high-dimensional data
From MaRDI portal
Publication:1002157
Abstract: The Lasso is an attractive technique for regularization and variable selection in high-dimensional data, where the number of predictor variables is potentially much larger than the number of samples. However, it was recently discovered that the sparsity pattern of the Lasso estimator can only be asymptotically identical to the true sparsity pattern if the design matrix satisfies the so-called irrepresentable condition. The latter condition can easily be violated in the presence of highly correlated variables. Here we examine the behavior of the Lasso estimators if the irrepresentable condition is relaxed. Even though the Lasso cannot recover the correct sparsity pattern, we show that the estimator is still consistent in the \(\ell_2\)-norm sense for fixed designs under conditions on (a) the number of nonzero components of the vector and (b) the minimal singular values of design matrices that are induced by selecting small subsets of variables. Furthermore, a rate of convergence result is obtained on the \(\ell_2\) error with an appropriate choice of the smoothing parameter. The rate is shown to be optimal under the condition of bounded maximal and minimal sparse eigenvalues. Our results imply that, with high probability, all important variables are selected. The set of selected variables is then a meaningful reduction of the original set of variables. Finally, our results are illustrated with the detection of closely adjacent frequencies, a problem encountered in astrophysics.
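The setting described in the abstract can be sketched numerically. The following is an illustrative toy implementation, not code from the paper: a plain coordinate-descent solver for the Lasso objective \((1/2n)\|y - X\beta\|_2^2 + \lambda\|\beta\|_1\), run on a simulated design with a highly correlated pair of columns (the situation in which the irrepresentable condition tends to fail). All variable names and the simulated data are chosen for illustration only.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator, the proximal map of the l1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=500):
    """Coordinate descent for (1/2n)||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n  # per-coordinate curvature
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with coordinate j removed
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r / n
            beta[j] = soft_threshold(rho, lam) / col_sq[j]
    return beta

# Simulate a sparse high-dimensional problem (p > n) with two
# highly correlated predictors, so exact sparsity-pattern recovery
# is not guaranteed, but the important directions are still captured.
rng = np.random.default_rng(0)
n, p = 100, 200
X = rng.standard_normal((n, p))
X[:, 1] = X[:, 0] + 0.1 * rng.standard_normal(n)  # correlated pair
beta_true = np.zeros(p)
beta_true[0], beta_true[2] = 2.0, 1.5
y = X @ beta_true + 0.1 * rng.standard_normal(n)

beta_hat = lasso_cd(X, y, lam=0.1)
```

In this sketch the Lasso may split weight between the two correlated columns 0 and 1 rather than identify the true support exactly, yet the fitted vector remains close to the truth in \(\ell_2\) norm and the selected set is a small superset of the important variables, mirroring the paper's distinction between sign consistency and \(\ell_2\) consistency.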
Recommendations
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- scientific article; zbMATH DE number 5957408
- Necessary and sufficient conditions for variable selection consistency of the Lasso in high dimensions
- Adaptive Lasso for sparse high-dimensional regression models
- On the sensitivity of the Lasso to the number of predictor variables
Cites work
- scientific article; zbMATH DE number 5957408
- scientific article; zbMATH DE number 5957506
- scientific article; zbMATH DE number 845714
- A theory for multiresolution signal decomposition: the wavelet representation
- Adaptive Lasso for Cox's proportional hazards model
- Aggregation for Gaussian regression
- Asymptotics for Lasso-type estimators.
- Asymptotics of sample eigenstructure for a large dimensional spiked covariance model
- Decoding by Linear Programming
- For most large underdetermined systems of linear equations the minimal \(\ell_1\)-norm solution is also the sparsest solution
- Greed is Good: Algorithmic Results for Sparse Approximation
- High-dimensional generalized linear models and the lasso
- High-dimensional graphs and variable selection with the Lasso
- Just relax: convex programming methods for identifying sparse signals in noise
- Least angle regression. (With discussion)
- Local operator theory, random matrices and Banach spaces.
- Model Selection and Estimation in Regression with Grouped Variables
- Optimally sparse representation in general (nonorthogonal) dictionaries via \(\ell_1\) minimization
- Persistence in high-dimensional linear predictor selection and the virtue of overparametrization
- Recovery of Exact Sparse Representations in the Presence of Bounded Noise
- Relaxed Lasso
- Simultaneous analysis of Lasso and Dantzig selector
- Sparse representations in unions of bases
- Sparsity oracle inequalities for the Lasso
- Stable recovery of sparse overcomplete representations in the presence of noise
- The resolution of closely adjacent spectral lines
- The Adaptive Lasso and Its Oracle Properties
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- The Group Lasso for Logistic Regression
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Using circulant symmetry to model featureless objects
Cited in
(only showing first 100 items)
- Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization
- On asymptotically optimal confidence regions and tests for high-dimensional models
- On the residual empirical process based on the ALASSO in high dimensions and its functional oracle property
- Regularization for Cox's proportional hazards model with NP-dimensionality
- Least squares after model selection in high-dimensional sparse models
- Influence measures and stability for graphical models
- Sign-constrained least squares estimation for high-dimensional regression
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- Pivotal estimation via square-root lasso in nonparametric regression
- Stability Selection
- Honest variable selection in linear and logistic regression models via \(\ell _{1}\) and \(\ell _{1}+\ell _{2}\) penalization
- On the asymptotic properties of the group lasso estimator for linear models
- Thresholding-based iterative selection procedures for model selection and shrinkage
- Extensions of stability selection using subsamples of observations and covariates
- A comparison of the Lasso and marginal regression
- General nonexact oracle inequalities for classes with a subexponential envelope
- \(L_1\)-regularized least squares for support recovery of high dimensional single index models with Gaussian designs
- \(\ell_1\)-penalized quantile regression in high-dimensional sparse models
- \(\ell_{1}\)-penalization for mixture regression models
- SLOPE is adaptive to unknown sparsity and asymptotically minimax
- \(\ell_1\)-regularization of high-dimensional time-series models with non-Gaussian and heteroskedastic errors
- Variable selection in high-dimensional quantile varying coefficient models
- High-dimensional additive modeling
- Near-ideal model selection by \(\ell _{1}\) minimization
- Autoregressive process modeling via the Lasso procedure
- An analysis of penalized interaction models
- Confidence intervals for low dimensional parameters in high dimensional linear models
- Estimation and variable selection with exponential weights
- Adaptive Dantzig density estimation
- High-dimensional regression with noisy and missing data: provable guarantees with nonconvexity
- Correlated variables in regression: clustering and sparse estimation
- Variable selection in nonparametric additive models
- Regression analysis of locality preserving projections via sparse penalty
- Simultaneous analysis of Lasso and Dantzig selector
- Oracle inequalities for the lasso in the Cox model
- Identifying small mean-reverting portfolios
- Non-asymptotic oracle inequalities for the Lasso and group Lasso in high dimensional logistic model
- Some sharp performance bounds for least squares regression with \(L_1\) regularization
- Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
- Lazy lasso for local regression
- Minimax-optimal nonparametric regression in high dimensions
- A general theory of concave regularization for high-dimensional sparse estimation problems
- Nearly unbiased variable selection under minimax concave penalty
- Strong consistency of Lasso estimators
- Rates of convergence of the adaptive LASSO estimators to the oracle distribution and higher order refinements by the bootstrap
- Statistical significance in high-dimensional linear models
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Lasso Inference for High-Dimensional Time Series
- High-dimensional variable screening and bias in subsequent inference, with an empirical comparison
- Learning sparse classifiers with difference of convex functions algorithms
- Performance bounds for parameter estimates of high-dimensional linear models with correlated errors
- Two tales of variable selection for high dimensional regression: Screening and model building
- Adaptive estimation of the baseline hazard function in the Cox model by model selection, with high-dimensional covariates
- Sub-optimality of some continuous shrinkage priors
- Calibrating nonconvex penalized regression in ultra-high dimension
- Structured estimation for the nonparametric Cox model
- Variable selection in high-dimensional partly linear additive models
- On the conditions used to prove oracle results for the Lasso
- Grouping strategies and thresholding for high dimensional linear models
- Estimation in high-dimensional linear models with deterministic design matrices
- Selection of variables and dimension reduction in high-dimensional non-parametric regression
- High-dimensional variable selection
- Graphical-model based high dimensional generalized linear models
- High-dimensional generalized linear models and the lasso
- Support union recovery in high-dimensional multivariate regression
- Regression on manifolds: estimation of the exterior derivative
- AIC for the Lasso in generalized linear models
- Oracle efficient variable selection in random and fixed effects panel data models
- Multiresolution functional ANOVA for large-scale, many-input computer experiments
- Consistency of Bayesian linear model selection with a growing number of parameters
- Multiscale change point inference. With discussion and authors' reply
- A cluster elastic net for multivariate regression
- Estimation for high-dimensional linear mixed-effects models using \(\ell_1\)-penalization
- Least angle and \(\ell _{1}\) penalized regression: a review
- Semiparametric efficiency bounds for high-dimensional models
- Semi-analytic resampling in Lasso
- The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso)
- \(\ell _{1}\)-regularized linear regression: persistence and oracle inequalities
- The Lasso as an \(\ell _{1}\)-ball model selection procedure
- Estimation and variable selection in partial linear single index models with error-prone linear covariates
- A new perspective on least squares under convex constraint
- A two-step method for estimating high-dimensional Gaussian graphical models
- Discussion: One-step sparse estimates in nonconcave penalized likelihood models
- Penalised robust estimators for sparse and high-dimensional linear models
- Focused vector information criterion model selection and model averaging regression with missing response
- Recovery of partly sparse and dense signals
- Bayesian high-dimensional screening via MCMC
- Estimation of high-dimensional partially-observed discrete Markov random fields
- Sparse Recovery With Unknown Variance: A LASSO-Type Approach
- Generalized Kalman smoothing: modeling and algorithms
- A review of Gaussian Markov models for conditional independence
- Lasso, fractional norm and structured sparse estimation using a Hadamard product parametrization
- Variable selection in censored quantile regression with high dimensional data
- MAP model selection in Gaussian regression
- Consistent tuning parameter selection in high dimensional sparse linear regression
- Generalization of constraints for high dimensional regression problems
- Discussion of: ``Grouping strategies and thresholding for high dimensional linear models''
- PAC-Bayesian estimation and prediction in sparse additive models
- Sharp support recovery from noisy random measurements by \(\ell_1\)-minimization
- On the sparsity of Lasso minimizers in sparse data recovery