Lasso-type recovery of sparse representations for high-dimensional data
Publication:1002157
Abstract: The Lasso is an attractive technique for regularization and variable selection for high-dimensional data, where the number of predictor variables is potentially much larger than the number of samples. However, it was recently discovered that the sparsity pattern of the Lasso estimator can only be asymptotically identical to the true sparsity pattern if the design matrix satisfies the so-called irrepresentable condition. The latter condition can easily be violated in the presence of highly correlated variables. Here we examine the behavior of the Lasso estimators if the irrepresentable condition is relaxed. Even though the Lasso cannot recover the correct sparsity pattern, we show that the estimator is still consistent in the \(\ell_2\)-norm sense for fixed designs under conditions on (a) the number of nonzero components of the vector and (b) the minimal singular values of design matrices that are induced by selecting small subsets of variables. Furthermore, a rate of convergence result is obtained on the \(\ell_2\) error with an appropriate choice of the smoothing parameter. The rate is shown to be optimal under the condition of bounded maximal and minimal sparse eigenvalues. Our results imply that, with high probability, all important variables are selected. The set of selected variables is then a meaningful reduction of the original set of variables. Finally, our results are illustrated with the detection of closely adjacent frequencies, a problem encountered in astrophysics.
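The phenomenon described in the abstract can be sketched numerically: when predictors share a strong common factor (which can violate the irrepresentable condition), the Lasso may pick up spurious variables, yet it still tends to select all important variables and keep the \(\ell_2\) estimation error small. This is an illustrative simulation only, not code from the paper; the design, coefficients, and regularization level are assumptions chosen for demonstration.

```python
# Illustrative sketch (not from the paper): Lasso under a correlated
# design that can violate the irrepresentable condition.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s = 200, 50, 3          # samples, predictors, true nonzeros

# Correlated design: every predictor loads on a shared latent factor,
# inducing strong pairwise correlations between columns of X.
factor = rng.standard_normal((n, 1))
X = 0.7 * factor + 0.3 * rng.standard_normal((n, p))

beta = np.zeros(p)
beta[:s] = [3.0, -2.0, 2.0]   # sparse true coefficient vector
y = X @ beta + 0.5 * rng.standard_normal(n)

# The regularization level alpha is an arbitrary illustrative choice.
model = Lasso(alpha=0.05).fit(X, y)
selected = np.flatnonzero(model.coef_)
l2_error = np.linalg.norm(model.coef_ - beta)

print("selected indices:", selected)
print("l2 error: %.3f" % l2_error)
# Exact sparsity-pattern recovery may fail (extra variables selected),
# but the important variables {0, 1, 2} are among those selected and
# the l2 error stays well below the norm of the true vector.
```

In this setting the selected set is typically a superset of the true support: a meaningful reduction of the 50 original variables, consistent with the screening property stated in the abstract.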
Recommendations
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- scientific article; zbMATH DE number 5957408
- Necessary and sufficient conditions for variable selection consistency of the Lasso in high dimensions
- Adaptive Lasso for sparse high-dimensional regression models
- On the sensitivity of the Lasso to the number of predictor variables
Cites work
- scientific article; zbMATH DE number 5957408
- scientific article; zbMATH DE number 5957506
- scientific article; zbMATH DE number 845714
- A theory for multiresolution signal decomposition: the wavelet representation
- Adaptive Lasso for Cox's proportional hazards model
- Aggregation for Gaussian regression
- Asymptotics for Lasso-type estimators.
- Asymptotics of sample eigenstructure for a large dimensional spiked covariance model
- Decoding by Linear Programming
- For most large underdetermined systems of linear equations the minimal \(\ell_1\)-norm solution is also the sparsest solution
- Greed is Good: Algorithmic Results for Sparse Approximation
- High-dimensional generalized linear models and the lasso
- High-dimensional graphs and variable selection with the Lasso
- Just relax: convex programming methods for identifying sparse signals in noise
- Least angle regression. (With discussion)
- Local operator theory, random matrices and Banach spaces.
- Model Selection and Estimation in Regression with Grouped Variables
- Optimally sparse representation in general (nonorthogonal) dictionaries via \(\ell_1\) minimization
- Persistence in high-dimensional linear predictor-selection and the virtue of overparametrization
- Recovery of Exact Sparse Representations in the Presence of Bounded Noise
- Relaxed Lasso
- Simultaneous analysis of Lasso and Dantzig selector
- Sparse representations in unions of bases
- Sparsity oracle inequalities for the Lasso
- Stable recovery of sparse overcomplete representations in the presence of noise
- The resolution of closely adjacent spectral lines
- The Adaptive Lasso and Its Oracle Properties
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- The Group Lasso for Logistic Regression
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Using circulant symmetry to model featureless objects
Cited in
(only showing first 100 items)
- Penalised robust estimators for sparse and high-dimensional linear models
- Focused vector information criterion model selection and model averaging regression with missing response
- Recovery of partly sparse and dense signals
- Bayesian high-dimensional screening via MCMC
- Estimation of high-dimensional partially-observed discrete Markov random fields
- Sparse Recovery With Unknown Variance: A LASSO-Type Approach
- Generalized Kalman smoothing: modeling and algorithms
- A review of Gaussian Markov models for conditional independence
- Lasso, fractional norm and structured sparse estimation using a Hadamard product parametrization
- Variable selection in censored quantile regression with high dimensional data
- MAP model selection in Gaussian regression
- Consistent tuning parameter selection in high dimensional sparse linear regression
- Generalization of constraints for high dimensional regression problems
- Discussion of: "Grouping strategies and thresholding for high dimension linear models"
- PAC-Bayesian estimation and prediction in sparse additive models
- Sharp support recovery from noisy random measurements by \(\ell_1\)-minimization
- On the sparsity of Lasso minimizers in sparse data recovery
- A unified penalized method for sparse additive quantile models: an RKHS approach
- Regularization and the small-ball method. I: Sparse recovery
- A generalized elastic net regularization with smoothed \(\ell _{q}\) penalty for sparse vector recovery
- Discussion: One-step sparse estimates in nonconcave penalized likelihood models
- High dimensional single index models
- Nonparametric and high-dimensional functional graphical models
- Cross-validation with confidence
- Model selection consistency of Lasso for empirical data
- Minimization of \(L_1\) over \(L_2\) for sparse signal recovery with convergence guarantee
- The smooth-Lasso and other \(\ell _{1}+\ell _{2}\)-penalized methods
- Generalized M-estimators for high-dimensional Tobit I models
- Searching for minimal optimal neural networks
- Convex biclustering
- A doubly sparse approach for group variable selection
- Monte Carlo simulation for Lasso-type problems by estimator augmentation
- The Lasso for high dimensional regression with a possible change point
- Least squares approximation with a diverging number of parameters
- Greedy variance estimation for the LASSO
- Sparse Laplacian shrinkage with the graphical Lasso estimator for regression problems
- Information-Based Optimal Subdata Selection for Big Data Linear Regression
- An Augmented Lagrangian Method for \(\ell_1\)-Regularized Optimization Problems with Orthogonality Constraints
- Approximate spectral gaps for Markov chain mixing times in high dimensions
- A new hybrid \(l_p\)-\(l_2\) model for sparse solutions with applications to image processing
- Robust machine learning by median-of-means: theory and practice
- An \(L_1\)-regularized logistic model for detecting short-term neuronal interactions
- High-dimensional Bayesian inference in nonparametric additive models
- Estimation of average treatment effects with panel data: asymptotic theory and implementation
- Asymptotic properties of Lasso+mLS and Lasso+Ridge in sparse high-dimensional linear regression
- Transductive versions of the Lasso and the Dantzig selector
- Recovery of seismic wavefields by an \(l_{q}\)-norm constrained regularization method
- SCAD-penalized quantile regression for high-dimensional data analysis and variable selection
- High-dimensional sparse portfolio selection with nonnegative constraint
- A Critical Review of LASSO and Its Derivatives for Variable Selection Under Dependence Among Covariates
- Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization
- On asymptotically optimal confidence regions and tests for high-dimensional models
- On the residual empirical process based on the ALASSO in high dimensions and its functional oracle property
- Regularization for Cox's proportional hazards model with NP-dimensionality
- Least squares after model selection in high-dimensional sparse models
- Influence measures and stability for graphical models
- Sign-constrained least squares estimation for high-dimensional regression
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- Pivotal estimation via square-root lasso in nonparametric regression
- Stability Selection
- Honest variable selection in linear and logistic regression models via \(\ell _{1}\) and \(\ell _{1}+\ell _{2}\) penalization
- On the asymptotic properties of the group lasso estimator for linear models
- Thresholding-based iterative selection procedures for model selection and shrinkage
- Extensions of stability selection using subsamples of observations and covariates
- A comparison of the Lasso and marginal regression
- General nonexact oracle inequalities for classes with a subexponential envelope
- \(L_1\)-regularized least squares for support recovery of high dimensional single index models with Gaussian designs
- \(\ell_1\)-penalized quantile regression in high-dimensional sparse models
- \(\ell_{1}\)-penalization for mixture regression models
- SLOPE is adaptive to unknown sparsity and asymptotically minimax
- \(\ell_1\)-regularization of high-dimensional time-series models with non-Gaussian and heteroskedastic errors
- Variable selection in high-dimensional quantile varying coefficient models
- High-dimensional additive modeling
- Near-ideal model selection by \(\ell _{1}\) minimization
- Autoregressive process modeling via the Lasso procedure
- An analysis of penalized interaction models
- Confidence intervals for low dimensional parameters in high dimensional linear models
- Estimation and variable selection with exponential weights
- Adaptive Dantzig density estimation
- High-dimensional regression with noisy and missing data: provable guarantees with nonconvexity
- Correlated variables in regression: clustering and sparse estimation
- Variable selection in nonparametric additive models
- Regression analysis of locality preserving projections via sparse penalty
- Simultaneous analysis of Lasso and Dantzig selector
- Oracle inequalities for the lasso in the Cox model
- Identifying small mean-reverting portfolios
- Non-asymptotic oracle inequalities for the Lasso and group Lasso in high dimensional logistic model
- Some sharp performance bounds for least squares regression with \(L_1\) regularization
- Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
- Lazy lasso for local regression
- Minimax-optimal nonparametric regression in high dimensions
- A general theory of concave regularization for high-dimensional sparse estimation problems
- Nearly unbiased variable selection under minimax concave penalty
- Strong consistency of Lasso estimators
- Rates of convergence of the adaptive LASSO estimators to the oracle distribution and higher order refinements by the bootstrap
- Statistical significance in high-dimensional linear models
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Lasso Inference for High-Dimensional Time Series
- High-dimensional variable screening and bias in subsequent inference, with an empirical comparison
- Learning sparse classifiers with difference of convex functions algorithms