Principal component-guided sparse regression
From MaRDI portal
Abstract: We propose a new method for supervised learning, especially suited to wide data where the number of features is much greater than the number of observations. The method combines the lasso (\(\ell_1\)) sparsity penalty with a quadratic penalty that shrinks the coefficient vector toward the leading principal components of the feature matrix. We call the proposed method the "principal components lasso" ("pcLasso"). The method can be especially powerful if the features are pre-assigned to groups (such as cell pathways, assays, or protein interaction networks). In that case, pcLasso shrinks each group-wise component of the solution toward the leading principal components of that group. In the process, it also carries out selection of the feature groups. We provide some theory for this method and illustrate it on a number of simulated and real data examples.
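The abstract describes an objective combining a lasso penalty with a quadratic penalty toward the leading principal components. A minimal NumPy sketch of such an objective is below; this is an illustrative reconstruction, not the authors' reference implementation (the published software is the R package pcLasso), and the exact weighting of the quadratic term here is an assumption.

```python
import numpy as np

# Illustrative sketch of a pcLasso-style objective (assumed form, not
# the authors' code): squared-error loss + lasso penalty + a quadratic
# penalty that is zero along the top principal component of X and grows
# for directions with smaller singular values.

def pclasso_objective(X, y, beta, lam, theta):
    """Evaluate (1/2n)||y - X beta||^2 + lam*||beta||_1
    + (theta/2) * beta^T V diag(d_1^2 - d_j^2) V^T beta,
    where X = U diag(d) V^T is the SVD of the feature matrix."""
    n = X.shape[0]
    _, d, Vt = np.linalg.svd(X, full_matrices=False)
    # Penalty matrix: no cost along the leading PC, increasing cost
    # along trailing PCs -- this is what "shrinks toward the leading
    # principal components" means here.
    M = Vt.T @ np.diag(d[0] ** 2 - d ** 2) @ Vt
    fit = 0.5 / n * np.sum((y - X @ beta) ** 2)
    return fit + lam * np.sum(np.abs(beta)) + 0.5 * theta * beta @ M @ beta
```

A coefficient vector aligned with the top principal component of X incurs no quadratic penalty, while one aligned with a trailing component is penalized; the grouped version of the method applies the same construction per feature group.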
Cited in (12)
- A component Lasso
- Dualize, split, randomize: toward fast nonsmooth optimization algorithms
- Sparse principal component regression for generalized linear models
- An analytical shrinkage estimator for linear regression
- Sparse principal component regression with adaptive loading
- A simple method to improve principal components regression
- Testing significance of features by lassoed principal components
- Alternative penalty functions for penalized likelihood principal components
- Penalized orthogonal-components regression for large \(p\) small \(n\) data
- Sparse principal component regression via singular value decomposition approach
- A guide for sparse PCA: model comparison and applications
- pcLasso
(MaRDI item Q135079)