Lasso-type recovery of sparse representations for high-dimensional data
From MaRDI portal
Abstract: The Lasso is an attractive technique for regularization and variable selection for high-dimensional data, where the number of predictor variables is potentially much larger than the number of samples. However, it was recently discovered that the sparsity pattern of the Lasso estimator can only be asymptotically identical to the true sparsity pattern if the design matrix satisfies the so-called irrepresentable condition. The latter condition can easily be violated in the presence of highly correlated variables. Here we examine the behavior of the Lasso estimators if the irrepresentable condition is relaxed. Even though the Lasso cannot recover the correct sparsity pattern, we show that the estimator is still consistent in the \(\ell_2\)-norm sense for fixed designs under conditions on (a) the number of nonzero components of the vector and (b) the minimal singular values of design matrices that are induced by selecting small subsets of variables. Furthermore, a rate of convergence result is obtained on the \(\ell_2\) error with an appropriate choice of the smoothing parameter. The rate is shown to be optimal under the condition of bounded maximal and minimal sparse eigenvalues. Our results imply that, with high probability, all important variables are selected. The set of selected variables is a meaningful reduction of the original set of variables. Finally, our results are illustrated with the detection of closely adjacent frequencies, a problem encountered in astrophysics.
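The abstract's central claim — that even when the irrepresentable condition fails and exact sparsity-pattern recovery is impossible, the Lasso still selects all important variables with high probability — can be illustrated with a small simulation. The sketch below is a hypothetical setup, not the authors' experiment: it uses a plain coordinate-descent Lasso and violates the irrepresentable condition by adding an irrelevant column highly correlated with a signal column, then checks that the true support is contained in the selected set (false positives are allowed).

```python
import numpy as np

def soft_threshold(z, t):
    # Soft-thresholding operator S(z, t) = sign(z) * max(|z| - t, 0)
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_sweeps=200):
    """Coordinate descent for (1/2n)||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n  # per-coordinate curvature
    for _ in range(n_sweeps):
        for j in range(p):
            # Partial residual excluding coordinate j
            resid = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ resid / n
            beta[j] = soft_threshold(rho, lam) / col_sq[j]
    return beta

rng = np.random.default_rng(0)
n, p, s = 100, 200, 5                       # p >> n, s true nonzero coefficients
X = rng.standard_normal((n, p))
X[:, 5] = X[:, 0] + 0.3 * rng.standard_normal(n)  # irrelevant column correlated
                                                  # with signal column 0
beta_true = np.zeros(p)
beta_true[:s] = 2.0
y = X @ beta_true + 0.1 * rng.standard_normal(n)

beta_hat = lasso_cd(X, y, lam=0.1)
selected = np.flatnonzero(np.abs(beta_hat) > 1e-2)
```

Under this setup the set `selected` should contain all of the indices 0 through 4 (the important variables), possibly along with a few extras such as the correlated twin — a variable-screening guarantee rather than exact pattern recovery, matching the abstract's statement.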
Recommendations
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Necessary and sufficient conditions for variable selection consistency of the Lasso in high dimensions
- Adaptive Lasso for sparse high-dimensional regression models
- On the sensitivity of the Lasso to the number of predictor variables
Cites work
- scientific article; zbMATH DE number 5957408
- scientific article; zbMATH DE number 5957506
- scientific article; zbMATH DE number 845714
- A theory for multiresolution signal decomposition: the wavelet representation
- Adaptive Lasso for Cox's proportional hazards model
- Aggregation for Gaussian regression
- Asymptotics for Lasso-type estimators.
- Asymptotics of sample eigenstructure for a large dimensional spiked covariance model
- Decoding by Linear Programming
- For most large underdetermined systems of linear equations the minimal \(\ell_1\)-norm solution is also the sparsest solution
- Greed is Good: Algorithmic Results for Sparse Approximation
- High-dimensional generalized linear models and the lasso
- High-dimensional graphs and variable selection with the Lasso
- Just relax: convex programming methods for identifying sparse signals in noise
- Least angle regression. (With discussion)
- Local operator theory, random matrices and Banach spaces.
- Model Selection and Estimation in Regression with Grouped Variables
- Optimally sparse representation in general (nonorthogonal) dictionaries via \(\ell_1\) minimization
- Persistence in high-dimensional linear predictor selection and the virtue of overparametrization
- Recovery of Exact Sparse Representations in the Presence of Bounded Noise
- Relaxed Lasso
- Simultaneous analysis of Lasso and Dantzig selector
- Sparse representations in unions of bases
- Sparsity oracle inequalities for the Lasso
- Stable recovery of sparse overcomplete representations in the presence of noise
- The resolution of closely adjacent spectral lines
- The Adaptive Lasso and Its Oracle Properties
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- The Group Lasso for Logistic Regression
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Using circulant symmetry to model featureless objects
Cited in
(showing first 100 items)
- Discussion: One-step sparse estimates in nonconcave penalized likelihood models
- A general theory of concave regularization for high-dimensional sparse estimation problems
- Adaptive estimation of the baseline hazard function in the Cox model by model selection, with high-dimensional covariates
- Identifying small mean-reverting portfolios
- Learning sparse classifiers with difference of convex functions algorithms
- On estimation error bounds of the Elastic Net when p ≫ n
- Variable selection and forecasting via automated methods for linear models: LASSO/adaLASSO and autometrics
- Discussion of: "Grouping strategies and thresholding for high dimensional linear models"
- A new approach for ultrahigh dimensional precision matrix estimation
- Generalized nonconvex hyperspectral anomaly detection via background representation learning with dictionary constraint
- Some sharp performance bounds for least squares regression with L₁ regularization
- Recovery of partly sparse and dense signals
- Estimation and variable selection in partial linear single index models with error-prone linear covariates
- Grouping strategies and thresholding for high dimensional linear models
- Penalized and ridge-type shrinkage estimators in Poisson regression model
- On the conditions used to prove oracle results for the Lasso
- Multivariate sparse Laplacian shrinkage for joint estimation of two graphical structures
- Regression on manifolds: estimation of the exterior derivative
- The smooth-Lasso and other \(\ell _{1}+\ell _{2}\)-penalized methods
- Lazy lasso for local regression
- Confidence intervals for low dimensional parameters in high dimensional linear models
- Adaptive Dantzig density estimation
- Least squares after model selection in high-dimensional sparse models
- Sign-constrained least squares estimation for high-dimensional regression
- A global homogeneity test for high-dimensional linear regression
- Fuzzy Lasso regression model with exact explanatory variables and fuzzy responses
- Adaptive Algorithm for Multi-Armed Bandit Problem with High-Dimensional Covariates
- Influence measures and stability for graphical models
- Understanding large text corpora via sparse machine learning
- Sub-optimality of some continuous shrinkage priors
- Multiscale change point inference. With discussion and authors' reply
- Non-asymptotic oracle inequalities for the Lasso and group Lasso in high dimensional logistic model
- Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization
- Simultaneous analysis of Lasso and Dantzig selector
- Selection of variables and dimension reduction in high-dimensional non-parametric regression
- High-dimensional generalized linear models and the lasso
- Optimal learning
- An analysis of penalized interaction models
- Regularization and the small-ball method. I: Sparse recovery
- Multiresolution functional ANOVA for large-scale, many-input computer experiments
- Model selection consistency of Lasso for empirical data
- Inference under Fine-Gray competing risks model with high-dimensional covariates
- Sharp support recovery from noisy random measurements by \(\ell_1\)-minimization
- The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso)
- Generalized Kalman smoothing: modeling and algorithms
- Detecting groups in large vector autoregressions
- Joint sparse optimization: lower-order regularization method and application in cell fate conversion
- \(\ell_1\)-penalized quantile regression in high-dimensional sparse models
- Sample average approximation with heavier tails II: localization in stochastic convex optimization and persistence results for the Lasso
- Greedy variance estimation for the LASSO
- Graphical-model based high dimensional generalized linear models
- Minimax-optimal nonparametric regression in high dimensions
- SLOPE is adaptive to unknown sparsity and asymptotically minimax
- A two-step method for estimating high-dimensional Gaussian graphical models
- Local restrained condition on sparse recovery in the high dimensional linear regression
- Penalised robust estimators for sparse and high-dimensional linear models
- Estimation and variable selection with exponential weights
- An \(L_1\)-regularized logistic model for detecting short-term neuronal interactions
- High dimensional single index models
- High-dimensional variable screening and bias in subsequent inference, with an empirical comparison
- A reduced half thresholding algorithm
- Robust machine learning by median-of-means: theory and practice
- Near-ideal model selection by \(\ell _{1}\) minimization
- Cross-validation with confidence
- A three-stage approach to identify biomarker signatures for cancer genetic data with survival endpoints
- Lasso-type and Heuristic Strategies in Model Selection and Forecasting
- Nonbifurcating Phylogenetic Tree Inference via the Adaptive LASSO
- A comparison of the Lasso and marginal regression
- Variable selection in high-dimensional partly linear additive models
- An overview of reciprocal \(L_1\)-regularization for high dimensional regression data
- Convex biclustering
- High-dimensional mean estimation via \(\ell_1\) penalized normal likelihood
- Transductive versions of the Lasso and the Dantzig selector
- A new perspective on least squares under convex constraint
- A Critical Review of LASSO and Its Derivatives for Variable Selection Under Dependence Among Covariates
- On the residual empirical process based on the ALASSO in high dimensions and its functional oracle property
- \(\ell _{1}\)-regularized linear regression: persistence and oracle inequalities
- Estimation for high-dimensional linear mixed-effects models using \(\ell_1\)-penalization
- Variable selection in censored quantile regression with high dimensional data
- Honest variable selection in linear and logistic regression models via \(\ell _{1}\) and \(\ell _{1}+\ell _{2}\) penalization
- On the asymptotic properties of the group lasso estimator for linear models
- Thresholding-based iterative selection procedures for model selection and shrinkage
- Rotation to sparse loadings using \(L^p\) losses and related inference problems
- AIC for the Lasso in generalized linear models
- Generalization of constraints for high dimensional regression problems
- Ridge regression revisited: debiasing, thresholding and bootstrap
- PAC-Bayesian estimation and prediction in sparse additive models
- Sparse Recovery With Unknown Variance: A LASSO-Type Approach
- A unified penalized method for sparse additive quantile models: an RKHS approach
- Which bridge estimator is the best for variable selection?
- Variance estimation based on blocked \(3\times 2\) cross-validation in high-dimensional linear regression
- Focused vector information criterion model selection and model averaging regression with missing response
- Consistency of Bayesian linear model selection with a growing number of parameters
- A new approach for ultrahigh-dimensional covariance matrix estimation
- An error bound for \(L_1\)-norm support vector machine coefficients in ultra-high dimension
- High-dimensional sparse portfolio selection with nonnegative constraint
- \(L_1\)-regularized least squares for support recovery of high dimensional single index models with Gaussian designs
- Least squares approximation with a diverging number of parameters
- Searching for minimal optimal neural networks
- Recovery of seismic wavefields by an \(l_{q}\)-norm constrained regularization method
This page was built for publication: Lasso-type recovery of sparse representations for high-dimensional data