swMATH: 8159 · CRAN: glmnet · MaRDI QID: Q20169
Lasso and Elastic-Net Regularized Generalized Linear Models
Rob Tibshirani, Trevor Hastie, Jerome Friedman, Noah Simon, James Yang, Kenneth Tay, Balasubramanian Narasimhan
Last update: 22 August 2023
Copyright license: GNU General Public License, version 2.0
Software version identifier: 4.1-7, 1.1-1, 1.1-2, 1.1-3, 1.1-4, 1.1-5, 1.1, 1.2, 1.3, 1.4, 1.5.1, 1.5.2, 1.5.3, 1.5, 1.6, 1.7.1, 1.7.3, 1.7.4, 1.7, 1.8-2, 1.8-4, 1.8-5, 1.8, 1.9-1, 1.9-3, 1.9-5, 1.9-8, 2.0-1, 2.0-2, 2.0-3, 2.0-4, 2.0-5, 2.0-8, 2.0-9, 2.0-10, 2.0-12, 2.0-13, 2.0-16, 2.0-18, 3.0-1, 3.0-2, 3.0, 4.0-2, 4.0, 4.1-1, 4.1-2, 4.1-3, 4.1-4, 4.1-6, 4.1, 4.1-8
Official website: http://cran.r-project.org/web/packages/glmnet/index.html
Source code repository: https://github.com/cran/glmnet
Extremely efficient procedures for fitting the entire lasso or elastic-net regularization path for linear regression, logistic and multinomial regression models, Poisson regression, the Cox model, multiple-response Gaussian models, and grouped multinomial regression; see <doi:10.18637/jss.v033.i01> and <doi:10.18637/jss.v039.i05>. There are two new and important additions. The family argument can be a GLM family object, which opens the door to any programmed family (<doi:10.18637/jss.v106.i01>). This comes with a modest computational cost, so when the built-in families suffice, they should be used instead. The other novelty is the relax option, which refits each of the active sets along the path without penalization. The algorithm uses cyclical coordinate descent in a path-wise fashion, as described in the papers cited.
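The cyclical coordinate descent underlying glmnet can be illustrated with a minimal sketch. This is not glmnet's implementation (which is in Fortran/C++ with warm starts, screening rules, and weighted updates) but a plain-NumPy version of the core update for the Gaussian lasso: cycle over coordinates, soft-thresholding each one against the partial residual. Function and variable names here are illustrative only.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Cyclical coordinate descent for the lasso objective
    (1/(2n)) * ||y - X b||^2 + lam * ||b||_1.
    Columns of X are assumed centered; a sketch, not glmnet itself."""
    n, p = X.shape
    b = np.zeros(p)
    r = y - X @ b                      # current residual
    col_sq = (X ** 2).sum(axis=0) / n  # per-coordinate curvature
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]        # remove coordinate j from the fit
            rho = X[:, j] @ r / n      # gradient at b_j = 0
            # soft-threshold: shrink toward zero by lam, zero out if small
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r -= X[:, j] * b[j]        # put coordinate j back
    return b
```

In a path-wise scheme, this update would be run on a decreasing grid of `lam` values, warm-starting each fit from the previous solution; glmnet's speed comes from that warm-start structure combined with the sparsity of the updates.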
Cited In (showing first 100 items)
- Model selection for high-dimensional quadratic regression via regularization
- Fast best subset selection: coordinate descent and local combinatorial optimization algorithms
- Multiple choice from competing regression models under multicollinearity based on standardized update
- Rare feature selection in high dimensions
- Combined \(\ell_1\) and greedy \(\ell_0\) penalized least squares for linear model selection
- Hierarchical inference for genome-wide association studies: a view on methodology with software
- A statistical mechanics approach to de-biasing and uncertainty estimation in Lasso for random measurements
- A penalty approach to differential item functioning in Rasch models
- Oblique random survival forests
- Variance prior forms for high-dimensional Bayesian variable selection
- A variable selection approach in the multivariate linear model: an application to LC-MS metabolomics data
- Shrinkage priors for Bayesian penalized regression
- Data shared Lasso: a novel tool to discover uplift
- IPF-LASSO: integrative \(L_1\)-penalized regression with penalty factors for prediction based on multi-omics data
- Majorization-minimization algorithms for nonsmoothly penalized objective functions
- A statistical pipeline for identifying physical features that differentiate classes of 3D shapes
- Targeted learning in data science. Causal inference for complex longitudinal studies
- Stability of feature selection in classification issues for high-dimensional correlated data
- Fast inference in generalized linear models via expected log-likelihoods
- Eigenvector-based sparse canonical correlation analysis: fast computation for estimation of multiple canonical vectors
- Interpretable dimension reduction for classifying functional data
- Sparse group Lasso and high dimensional multinomial classification
- Integrative analysis of cancer diagnosis studies with composite penalization
- Adaptive Lasso estimators for ultrahigh dimensional generalized linear models
- Sparse directed acyclic graphs incorporating the covariates
- Covariance-regularized regression and classification for high dimensional problems
- A significance test for the lasso
- Sparse semiparametric discriminant analysis
- Stable graphical model estimation with random forests for discrete, continuous, and mixed variables
- The spike-and-slab LASSO
- Fitting very large sparse Gaussian graphical models
- The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso)
- A uniform framework for the combination of penalties in generalized structured models
- Multicategory large margin classification methods: hinge losses vs. coherence functions
- Sparse classification: a scalable discrete optimization perspective
- Discussion: "A significance test for the lasso"
- Visualization and assessment of model selection uncertainty
- High-dimensional regression in practice: an empirical study of finite-sample prediction, variable selection and ranking
- A direct estimation of high dimensional stationary vector autoregressions
- High-dimensional variable screening and bias in subsequent inference, with an empirical comparison
- Estimation of sparse binary pairwise Markov networks using pseudo-likelihoods
- High-dimensional Ising model selection with Bayesian information criteria
- A direct approach to sparse discriminant analysis in ultra-high dimensions
- Variable selection in linear mixed models using an extended class of penalties
- Estimation for high-dimensional linear mixed-effects models using \(\ell_1\)-penalization
- Differential Markov random field analysis with an application to detecting differential microbial community networks
- Title not available
- Coordinate descent algorithms for nonconvex penalized regression, with applications to biological feature selection
- Penalized methods for bi-level variable selection
- The group exponential Lasso for bi-level variable selection
- MM for penalized estimation
- Variable selection in discrete survival models including heterogeneity
- Estimating treatment effect heterogeneity in randomized program evaluation
- Penalized estimation of directed acyclic graphs from discrete data
- Model selection for factorial Gaussian graphical models with an application to dynamic regulatory networks
- A generalized additive model approach to time-to-event analysis
- Conducting sparse feature selection on arbitrarily long phrases in text corpora with a focus on interpretability
- Model-assisted inference for treatment effects using regularized calibrated estimation with high-dimensional data
- Interpretable sparse SIR for functional data
- Covariate Selection in High-Dimensional Generalized Linear Models With Measurement Error
- Factor-Adjusted Regularized Model Selection
- A lasso for hierarchical interactions
- Quadratic Majorization for Nonconvex Loss with Applications to the Boosting Algorithm
- Advances in integrative statistics for logic programming
- Variable selection for BART: an application to gene regulation
- Complete subset regressions
- A penalized approach to covariate selection through quantile regression coefficient models
- Sparse classification with paired covariates
- Group inference in high dimensions with applications to hierarchical testing
- Identification of biomarker‐by‐treatment interactions in randomized clinical trials with survival outcomes and high‐dimensional spaces
- On asymptotically optimal confidence regions and tests for high-dimensional models
- APPLE: approximate path for penalized likelihood estimators
- Correlated variables in regression: clustering and sparse estimation
- A cocktail algorithm for solving the elastic net penalized Cox's regression in high dimensions
- Majorization minimization by coordinate descent for concave penalized generalized linear models
- Missing values: sparse inverse covariance estimation and an extension to sparse regression
- Estimation of variance components, heritability and the ridge penalty in high-dimensional generalized linear models
- Regularization for Cox's proportional hazards model with NP-dimensionality
- Penalized Estimation and Forecasting of Multiple Subject Intensive Longitudinal Data
- Variable selection for varying dispersion beta regression model
- Testing conditional independence in supervised learning algorithms
- A fast unified algorithm for solving group-lasso penalized learning problems
- \(\ell_{1}\)-penalization for mixture regression models
- Stable Multiple Time Step Simulation/Prediction From Lagged Dynamic Network Regression Models
- Computationally efficient confidence intervals for cross-validated area under the ROC curve estimates
- Regularized multivariate regression for identifying master predictors with application to integrative genomics study of breast cancer
- Main effects and interactions in mixed and incomplete data frames
- A modified local quadratic approximation algorithm for penalized optimization problems
- Performance bounds for parameter estimates of high-dimensional linear models with correlated errors
- Quasi-likelihood and/or robust estimation in high dimensions
- Supersparse linear integer models for optimized medical scoring systems
- NESTA: A fast and accurate first-order method for sparse recovery
- LIBLINEAR: a library for large linear classification
- Templates for convex cone problems with applications to sparse signal recovery
- High-dimensional multivariate posterior consistency under global-local shrinkage priors
- Feature selection and tumor classification for microarray data using relaxed Lasso and generalized multi-class support vector machine
- Variable and boundary selection for functional data via multiclass logistic regression modeling
- Accelerating cross-validation in multinomial logistic regression with \(\ell_1\)-regularization
- Spatial variable selection and an application to Virginia Lyme disease emergence
- Goodness-of-fit tests for functional linear models based on integrated projections