APPLE: approximate path for penalized likelihood estimators
Abstract: In high-dimensional data analysis, penalized likelihood estimators have been shown to provide superior results in both variable selection and parameter estimation. A new algorithm, APPLE, is proposed for calculating the Approximate Path for Penalized Likelihood Estimators. Both convex penalties (such as the LASSO) and nonconvex penalties (such as SCAD and MCP) are considered. APPLE efficiently computes the solution path of the penalized likelihood estimator using a hybrid of a modified predictor-corrector method and the coordinate-descent algorithm. APPLE is compared with several well-known packages via simulation and analysis of two gene expression data sets.
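The abstract's core ingredient, coordinate descent run over a decreasing penalty grid with warm starts, can be illustrated with a minimal sketch. This is not the authors' APPLE implementation; it is a generic pathwise coordinate-descent solver for the LASSO-penalized least-squares problem (1/2n)||y - Xb||² + λ||b||₁, with all function and variable names chosen here for illustration:

```python
import numpy as np

def soft_threshold(z, t):
    # Soft-thresholding operator: the closed-form coordinate-wise LASSO update.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_path(X, y, lambdas, n_iter=200, tol=1e-8):
    """Coordinate descent over a decreasing lambda grid with warm starts.

    At each lambda, minimizes (1/2n)||y - X b||^2 + lambda * ||b||_1.
    Returns an array of shape (len(lambdas), p): one coefficient vector
    per grid point, forming an approximate solution path.
    """
    n, p = X.shape
    col_sq = (X ** 2).sum(axis=0) / n          # per-coordinate curvature X_j'X_j / n
    beta = np.zeros(p)
    path = []
    for lam in lambdas:                        # warm start: reuse the previous beta
        for _ in range(n_iter):
            beta_old = beta.copy()
            for j in range(p):
                # Partial residual: leave coordinate j out of the fit.
                r_j = y - X @ beta + X[:, j] * beta[j]
                z = X[:, j] @ r_j / n
                beta[j] = soft_threshold(z, lam) / col_sq[j]
            if np.max(np.abs(beta - beta_old)) < tol:
                break
        path.append(beta.copy())
    return np.array(path)
```

Warm starts are what make the path cheap: each solution is a good initial point for the next, slightly smaller penalty, so few sweeps are needed per grid point. A nonconvex penalty such as SCAD or MCP replaces the soft-thresholding step with the corresponding penalty-specific univariate update.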
Recommendations
- Paths Following Algorithm for Penalized Logistic Regression Using SCAD and MCP
- Penalized likelihood regression: General formulation and efficient approximation
- One-step sparse estimates in nonconcave penalized likelihood models
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- \(L_{1}\) penalized estimation in the Cox proportional hazards model
Cites work
- scientific article; zbMATH DE number 5957245
- scientific article; zbMATH DE number 3945130
- scientific article; zbMATH DE number 3444596
- \(L_1\)-Regularization Path Algorithm for Generalized Linear Models
- A Selective Overview of Variable Selection in High Dimensional Feature Space (Invited Review Article)
- A new approach to variable selection in least squares problems
- An interior-point method for large-scale \(l_1\)-regularized logistic regression
- An iterative thresholding algorithm for linear inverse problems with a sparsity constraint
- An ordinary differential equation-based solution path algorithm
- Classification of gene microarrays by penalized logistic regression
- Coordinate descent algorithms for lasso penalized regression
- Coordinate descent algorithms for nonconvex penalized regression, with applications to biological feature selection
- Correction: Strong oracle optimality of folded concave penalized estimation
- Efficient global approximation of generalized nonlinear \(\ell _{1}\)-regularized solution paths and its applications
- Estimating the dimension of a model
- Extended Bayesian information criteria for model selection with large model spaces
- Group coordinate descent algorithms for nonconvex penalized regression
- High-dimensional generalized linear models and the lasso
- Least angle regression. (With discussion)
- Likelihood adaptively modified penalties
- Nearly unbiased variable selection under minimax concave penalty
- One-step sparse estimates in nonconcave penalized likelihood models
- Pathwise coordinate optimization
- Piecewise linear regularized solution paths
- Risk bounds for model selection via penalization
- Some Comments on \(C_P\)
- The Group Lasso for Logistic Regression
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Understanding WaveShrink: variance and bias estimation
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
Cited in (5 documents)
- An alternating direction method of multipliers for MCP-penalized regression with high-dimensional data
- Likelihood adaptively modified penalties
- Model selection for high-dimensional quadratic regression via regularization
- APPLE
- Penalized likelihood regression: General formulation and efficient approximation