A component Lasso
From MaRDI portal
DOI: 10.1002/cjs.11267
zbMATH Open: 1329.62326
arXiv: 1311.4472
OpenAlex: W2963982265
MaRDI QID: Q3463403
FDO: Q3463403
Authors: Nadine Hussami, Robert Tibshirani
Publication date: 14 January 2016
Published in: The Canadian Journal of Statistics
Abstract: We propose a new sparse regression method called the component lasso, based on a simple idea. The method uses the connected-components structure of the sample covariance matrix to split the problem into smaller subproblems, which it solves separately to obtain a coefficient vector for each component. It then uses non-negative least squares to recombine the different vectors into a single solution; this step helps select and reweight the components that are correlated with the response. Simulated and real data examples show that the component lasso can outperform standard regression methods such as the lasso and the elastic net, achieving lower mean squared error as well as better support recovery.
Full work available at URL: https://arxiv.org/abs/1311.4472
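The abstract describes a three-step algorithm: split the variables via the connected components of the sample covariance matrix, fit a lasso on each component separately, and recombine the per-component fits with non-negative least squares. Below is a minimal Python sketch of that idea, not the authors' implementation: it assumes a simple absolute-correlation threshold to define the component graph (the paper ties the split to thresholding the sample covariance, as in graphical-lasso screening), and the function name `component_lasso` together with the `alpha` and `threshold` values are illustrative choices rather than values from the paper.

```python
import numpy as np
from scipy.optimize import nnls
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from sklearn.linear_model import Lasso

def component_lasso(X, y, alpha=0.1, threshold=0.2):
    """Sketch of the component lasso: split variables by covariance
    components, fit a lasso per component, recombine fits with NNLS."""
    n, p = X.shape
    # Step 1: connect pairs of variables whose absolute sample
    # correlation exceeds a threshold (assumed splitting rule), then
    # take the connected components of the resulting graph.
    adj = np.abs(np.corrcoef(X, rowvar=False)) >= threshold
    np.fill_diagonal(adj, False)
    n_comp, labels = connected_components(csr_matrix(adj), directed=False)

    # Step 2: solve a separate lasso on the variables of each
    # connected component, keeping each component's fitted vector.
    beta = np.zeros(p)
    fits = np.zeros((n, n_comp))
    for k in range(n_comp):
        idx = np.flatnonzero(labels == k)
        fit = Lasso(alpha=alpha).fit(X[:, idx], y)
        beta[idx] = fit.coef_
        fits[:, k] = X[:, idx] @ fit.coef_

    # Step 3: recombine with non-negative least squares, which selects
    # and reweights the components correlated with the response.
    weights, _ = nnls(fits, y)
    for k in range(n_comp):
        beta[labels == k] *= weights[k]
    return beta
```

For simplicity the sketch drops each per-component intercept before recombination and scales a whole component's coefficients by a single NNLS weight; the non-negativity constraint is what zeroes out components unrelated to the response.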
Keywords: Lasso; elastic net; sparsity; graphical Lasso; connected components; non-negative least squares; strong irrepresentable condition
Cites Work
- The graphical lasso: new insights and alternatives
- Covariance-regularized regression and classification for high dimensional problems
- Correlated variables in regression: clustering and sparse estimation
- Title not available
- On the conditions used to prove oracle results for the Lasso
- High-dimensional graphs and variable selection with the Lasso
- Title not available
- Atomic Decomposition by Basis Pursuit
- Regularization and Variable Selection Via the Elastic Net
- Persistence in high-dimensional linear predictor selection and the virtue of overparametrization
- Asymptotics for Lasso-type estimators.
- Sign-constrained least squares estimation for high-dimensional regression
- Sparsity oracle inequalities for the Lasso
- Relaxed Lasso
- Just relax: convex programming methods for identifying sparse signals in noise
- For most large underdetermined systems of linear equations the minimal ℓ1-norm solution is also the sparsest solution
- Hybrid hierarchical clustering with applications to microarray data
- Greed is Good: Algorithmic Results for Sparse Approximation
- Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization
- Discussion of "Correlated variables in regression: clustering and sparse estimation"
- Averaged gene expressions for regression
- A Model-Averaging Approach for High-Dimensional Regression
- The cluster graphical Lasso for improved estimation of Gaussian graphical models
Cited In (3)
This page was built for publication: A component Lasso