Improved variable selection with forward-lasso adaptive shrinkage
From MaRDI portal
Abstract: Recently, considerable interest has focused on variable selection methods in regression situations where the number of predictors, \(p\), is large relative to the number of observations, \(n\). Two commonly applied variable selection approaches are the Lasso, which computes highly shrunk regression coefficients, and Forward Selection, which uses no shrinkage. We propose a new approach, "Forward-Lasso Adaptive SHrinkage" (FLASH), which includes the Lasso and Forward Selection as special cases, and can be used in both the linear regression and the Generalized Linear Model domains. As with the Lasso and Forward Selection, FLASH iteratively adds one variable to the model in a hierarchical fashion but, unlike these methods, at each step adjusts the level of shrinkage so as to optimize the selection of the next variable. We first present FLASH in the linear regression setting and show that it can be fitted using a variant of the computationally efficient LARS algorithm. Then, we extend FLASH to the GLM domain and demonstrate, through numerous simulations and real-world data sets, as well as some theoretical analysis, that FLASH generally outperforms many competing approaches.
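The abstract's key idea, interpolating between Lasso-style heavy shrinkage and unshrunken Forward Selection by tuning a per-step shrinkage level, can be illustrated with a toy forward-stagewise procedure. This is a minimal sketch under simplifying assumptions, not the authors' LARS-based FLASH algorithm: at each step the variable most correlated with the residual is updated by a fraction `delta` of its univariate least-squares step, and `delta` is chosen from a small grid by validation error (the function name, `delta` grid, and validation-based tuning are illustrative choices, not from the paper). Taking `delta` near 0 mimics heavily shrunk Lasso-like steps; `delta = 1` mimics Forward Selection.

```python
import numpy as np

def flash_like_path(X, y, X_val, y_val,
                    deltas=(0.25, 0.5, 0.75, 1.0), max_steps=10):
    """Toy FLASH-flavored forward-stagewise selection (illustrative only).

    At each step:
      1. pick the predictor most correlated with the current residual;
      2. for each candidate shrinkage level delta, move that coefficient a
         fraction delta of the way to its univariate least-squares update;
      3. keep the delta that gives the smallest validation RSS.
    """
    n, p = X.shape
    beta = np.zeros(p)
    active = []  # order in which variables enter the model
    for _ in range(max_steps):
        r = y - X @ beta                      # current training residual
        corr = X.T @ r                        # correlations with residual
        j = int(np.argmax(np.abs(corr)))      # next variable to update
        if j not in active:
            active.append(j)
        best_rss, best_beta = np.inf, beta
        for d in deltas:
            b = beta.copy()
            # fraction d of the univariate least-squares step for X[:, j]
            b[j] += d * corr[j] / (X[:, j] @ X[:, j])
            rss = np.sum((y_val - X_val @ b) ** 2)  # score on held-out data
            if rss < best_rss:
                best_rss, best_beta = rss, b
        beta = best_beta
    return beta, active
```

Because the shrinkage grid contains `1.0`, pure Forward Selection is a special case, and a grid of uniformly small values approximates Lasso-like behavior, mirroring the "special cases" claim in the abstract at a much cruder level than the actual algorithm.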
Cites work
- scientific article; zbMATH DE number 5957408 (no title available)
- scientific article; zbMATH DE number 845714 (no title available)
- A Statistical View of Some Chemometrics Regression Tools
- A generalized Dantzig selector with shrinkage tuning
- Adaptive Lasso for sparse high-dimensional regression models
- FIRST: combining forward iterative selection and shrinkage in high dimensional sparse linear regression
- Least angle regression. (With discussion)
- Pathwise coordinate optimization
- Regularization and Variable Selection Via the Elastic Net
- Relaxed Lasso
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell _{1}$-Constrained Quadratic Programming (Lasso)
- The Adaptive Lasso and Its Oracle Properties
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Variable inclusion and shrinkage algorithms
Cited in (27 documents)
- Variable selection for high dimensional Gaussian copula regression model: an adaptive hypothesis testing procedure
- Variable selection and forecasting via automated methods for linear models: LASSO/adaLASSO and autometrics
- The predictive Lasso
- Fast FSR Variable Selection with Applications to Clinical Trials
- Interpretable dimension reduction for classifying functional data
- Variable selection in functional additive regression models
- Dealing with big data: comparing dimension reduction and shrinkage regression methods
- Forward-backward selection with early dropping
- Principal minimax support vector machine for sufficient dimension reduction with contaminated data
- Variable selection for survival data with a class of adaptive elastic net techniques
- Doubly robust semiparametric inference using regularized calibrated estimation with high-dimensional data
- FIRST: combining forward iterative selection and shrinkage in high dimensional sparse linear regression
- Joint estimation and variable selection for mean and dispersion in proper dispersion models
- Variable inclusion and shrinkage algorithms
- Robust regression: an inferential method for determining which independent variables are most important
- Fast forward selection for generalized estimating equations with a large number of predictor variables
- Variable Selection and Shrinkage: Comparison of Some Approaches
- Long-term time series prediction using OP-ELM
- Decomposition feature selection with applications in detecting correlated biomarkers of bipolar disorders
- Best subset, forward stepwise or Lasso? Analysis and recommendations based on extensive comparisons
- Quantile forward regression for high-dimensional survival data
- Lookahead and piloting strategies for variable selection
- Regularization for electricity price forecasting
- Variable selection via RIVAL (removing irrelevant variables amidst lasso iterations) and its application to nuclear material detection
- Sequential Lasso cum EBIC for feature selection with ultra-high dimensional feature space
- Sparse least trimmed squares regression for analyzing high-dimensional large data sets
- In defense of LASSO