A component Lasso
From MaRDI portal
DOI: 10.1002/CJS.11267 · zbMATH Open: 1329.62326 · OpenAlex: W2963982265 · MaRDI QID: Q3463403
Authors: Nadine Hussami, Robert Tibshirani
Publication date: 14 January 2016
Published in: The Canadian Journal of Statistics
Abstract: We propose a new sparse regression method called the component lasso, based on a simple idea. The method uses the connected-components structure of the sample covariance matrix to split the problem into smaller ones. It then solves the subproblems separately, obtaining a coefficient vector for each one. Then, it uses non-negative least squares to recombine the different vectors into a single solution. This step is useful in selecting and reweighting components that are correlated with the response. Simulated and real data examples show that the component lasso can outperform standard regression methods such as the lasso and elastic net, achieving a lower mean squared error as well as better support recovery.
Full work available at URL: https://arxiv.org/abs/1311.4472
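The three-step pipeline from the abstract (split by the connected components of the sample covariance, solve a lasso subproblem per component, recombine the component fits with non-negative least squares) can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the function name `component_lasso`, the covariance `threshold` used to build the component graph, and the toy data are all assumptions made here for demonstration.

```python
import numpy as np
from scipy.optimize import nnls
from scipy.sparse.csgraph import connected_components
from sklearn.linear_model import Lasso

def component_lasso(X, y, alpha=0.1, threshold=0.1):
    """Sketch of the component lasso idea:
    1. split features by connected components of the (thresholded) sample covariance,
    2. fit a lasso on each component separately,
    3. recombine the per-component fits via non-negative least squares."""
    n, p = X.shape
    # Step 1: build a feature graph from the sample covariance.
    # (The threshold is a tuning choice assumed here for illustration.)
    S = np.cov(X, rowvar=False)
    adjacency = np.abs(S) > threshold
    n_comp, labels = connected_components(adjacency, directed=False)
    # Step 2: one lasso subproblem per component, yielding a coefficient
    # vector and a fitted response for each component.
    betas = np.zeros((p, n_comp))
    fits = np.zeros((n, n_comp))
    for c in range(n_comp):
        idx = np.where(labels == c)[0]
        beta_c = Lasso(alpha=alpha, max_iter=10_000).fit(X[:, idx], y).coef_
        betas[idx, c] = beta_c
        fits[:, c] = X[:, idx] @ beta_c
    # Step 3: NNLS reweights components correlated with the response.
    w, _ = nnls(fits, y)
    return betas @ w  # single recombined coefficient vector

# Toy usage: two independent blocks of three correlated features each,
# so the covariance graph splits into two components.
rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 2))
X = np.column_stack(
    [Z[:, 0] + 0.1 * rng.normal(size=200) for _ in range(3)]
    + [Z[:, 1] + 0.1 * rng.normal(size=200) for _ in range(3)]
)
y = X[:, 0] - X[:, 4] + 0.5 * rng.normal(size=200)
beta = component_lasso(X, y)
print(beta.shape)  # (6,)
```

The NNLS step is what distinguishes this from simply fitting independent lassos: components whose fitted values are uncorrelated with the response receive zero weight and drop out of the final coefficient vector.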
Keywords: Lasso; elastic net; sparsity; graphical Lasso; connected components; non-negative least squares; strong irrepresentable condition
Cites Work
- Title not available
- Title not available
- A model-averaging approach for high-dimensional regression
- Asymptotics for Lasso-type estimators
- Atomic Decomposition by Basis Pursuit
- Averaged gene expressions for regression
- Correlated variables in regression: clustering and sparse estimation
- Covariance-regularized regression and classification for high dimensional problems
- Discussion of "Correlated variables in regression: clustering and sparse estimation"
- For most large underdetermined systems of linear equations the minimal ℓ1-norm solution is also the sparsest solution
- Greed is Good: Algorithmic Results for Sparse Approximation
- High-dimensional graphs and variable selection with the Lasso
- Hybrid hierarchical clustering with applications to microarray data
- Just relax: convex programming methods for identifying sparse signals in noise
- Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization
- On the conditions used to prove oracle results for the Lasso
- Persistence in high-dimensional linear predictor selection and the virtue of overparametrization
- Regularization and Variable Selection Via the Elastic Net
- Relaxed Lasso
- Sign-constrained least squares estimation for high-dimensional regression
- Sparsity oracle inequalities for the Lasso
- The cluster graphical Lasso for improved estimation of Gaussian graphical models
- The graphical lasso: new insights and alternatives
Cited In (9)
- Sparse regression with exact clustering
- Component-wisely sparse boosting
- A note on moment inequality for quadratic forms
- Blockwise sparse regression
- Sparse classification with paired covariates
- Independently interpretable Lasso for generalized linear models
- A distributed algorithm for Lasso variable selection
- Title not available
- Principal component-guided sparse regression