A component Lasso
From MaRDI portal
Publication:3463403
Abstract: We propose a new sparse regression method called the component lasso, based on a simple idea. The method uses the connected-components structure of the sample covariance matrix to split the problem into smaller ones. It then solves the subproblems separately, obtaining a coefficient vector for each one. Then, it uses non-negative least squares to recombine the different vectors into a single solution. This step is useful in selecting and reweighting components that are correlated with the response. Simulated and real data examples show that the component lasso can outperform standard regression methods such as the lasso and elastic net, achieving a lower mean squared error as well as better support recovery.
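The three steps described in the abstract (split by the connected-components structure of the sample covariance, fit each subproblem separately, recombine the component fits with non-negative least squares) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the correlation threshold, the lasso penalty, and the handling of intercepts are simplifying assumptions made here for clarity.

```python
import numpy as np
from scipy.optimize import nnls
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from sklearn.linear_model import Lasso

def component_lasso(X, y, threshold=0.1, alpha=0.1):
    """Sketch of the component lasso (assumed hyperparameters, not the paper's choices)."""
    # Step 1: connected components of the thresholded sample correlation matrix.
    S = np.abs(np.corrcoef(X, rowvar=False))
    adjacency = csr_matrix(S > threshold)
    n_comp, labels = connected_components(adjacency, directed=False)

    # Step 2: solve a separate lasso problem on each component's variables.
    fits, betas = [], []
    for k in range(n_comp):
        idx = np.where(labels == k)[0]
        model = Lasso(alpha=alpha).fit(X[:, idx], y)
        fits.append(X[:, idx] @ model.coef_ + model.intercept_)
        betas.append((idx, model.coef_))

    # Step 3: recombine the per-component fitted values with non-negative
    # least squares, which selects and reweights the components.
    F = np.column_stack(fits)
    weights, _ = nnls(F, y)

    # Assemble a single coefficient vector by scaling each component's
    # coefficients with its NNLS weight (intercepts are dropped in this sketch).
    beta = np.zeros(X.shape[1])
    for w, (idx, b) in zip(weights, betas):
        beta[idx] = w * b
    return beta
```

A typical call would be `beta = component_lasso(X, y)` on a centered design matrix `X` and response `y`; when the thresholded correlation graph is fully connected, the procedure reduces to a single lasso fit rescaled by one NNLS weight.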
Recommendations
Cites work
- scientific article (zbMATH DE number 5957408; no title available)
- scientific article (zbMATH DE number 845714; no title available)
- A model-averaging approach for high-dimensional regression
- Asymptotics for Lasso-type estimators
- Atomic Decomposition by Basis Pursuit
- Averaged gene expressions for regression
- Correlated variables in regression: clustering and sparse estimation
- Covariance-regularized regression and classification for high dimensional problems
- Discussion of ``Correlated variables in regression: clustering and sparse estimation''
- For most large underdetermined systems of linear equations the minimal ℓ1-norm solution is also the sparsest solution
- Greed is Good: Algorithmic Results for Sparse Approximation
- High-dimensional graphs and variable selection with the Lasso
- Hybrid hierarchical clustering with applications to microarray data
- Just relax: convex programming methods for identifying sparse signals in noise
- Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization
- On the conditions used to prove oracle results for the Lasso
- Persistence in high-dimensional linear predictor selection and the virtue of overparametrization
- Regularization and Variable Selection Via the Elastic Net
- Relaxed Lasso
- Sign-constrained least squares estimation for high-dimensional regression
- Sparsity oracle inequalities for the Lasso
- The cluster graphical Lasso for improved estimation of Gaussian graphical models
- The graphical lasso: new insights and alternatives
Cited in (9)
- Principal component-guided sparse regression
- Independently interpretable Lasso for generalized linear models
- Component-wisely sparse boosting
- A note on moment inequality for quadratic forms
- scientific article (zbMATH DE number 5957506; no title available)
- A distributed algorithm for Lasso variable selection
- Blockwise sparse regression
- Sparse regression with exact clustering
- Sparse classification with paired covariates
This page was built for publication: A component Lasso