Coordinate descent algorithms for lasso penalized regression

From MaRDI portal
Publication:2482976

zbMATH Open: 1137.62045 · arXiv: 0803.3876 · MaRDI QID: Q2482976 · FDO: Q2482976


Authors: Tong Tong Wu, Kenneth Lange


Publication date: 30 April 2008

Published in: The Annals of Applied Statistics

Abstract: Imposition of a lasso penalty shrinks parameter estimates toward zero and performs continuous model selection. Lasso penalized regression is capable of handling linear regression problems where the number of predictors far exceeds the number of cases. This paper tests two exceptionally fast algorithms for estimating regression coefficients with a lasso penalty. The previously known ℓ2 algorithm is based on cyclic coordinate descent. Our new ℓ1 algorithm is based on greedy coordinate descent and Edgeworth's algorithm for ordinary ℓ1 regression. Each algorithm relies on a tuning constant that can be chosen by cross-validation. In some regression problems it is natural to group parameters and penalize parameters group by group rather than separately. If the group penalty is proportional to the Euclidean norm of the parameters of the group, then it is possible to majorize the norm and reduce parameter estimation to ℓ2 regression with a lasso penalty. Thus, the existing algorithm can be extended to novel settings. Each of the algorithms discussed is tested via either simulated or real data or both. The Appendix proves that a greedy form of the ℓ2 algorithm converges to the minimum value of the objective function.
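To make the cyclic coordinate descent idea from the abstract concrete, here is a minimal NumPy sketch of coordinate descent for the ℓ2 (least-squares) lasso objective 0.5·‖y − Xb‖² + λ‖b‖₁. Each coordinate update has a closed form via soft-thresholding. This is an illustrative sketch under standard assumptions (dense design matrix, fixed iteration count), not the authors' implementation; all function and variable names are hypothetical.

```python
import numpy as np

def soft_threshold(z, gamma):
    # Closed-form minimizer of 0.5*(b - z)**2 + gamma*|b|.
    return np.sign(z) * max(abs(z) - gamma, 0.0)

def lasso_cyclic_cd(X, y, lam, n_iter=100):
    """Cyclic coordinate descent for 0.5*||y - X b||^2 + lam*||b||_1.

    Illustrative sketch: sweeps through coordinates in order,
    updating each by exact one-dimensional minimization.
    """
    n, p = X.shape
    b = np.zeros(p)
    r = y - X @ b                      # current residual
    col_sq = (X ** 2).sum(axis=0)      # ||X_j||^2 for each column
    for _ in range(n_iter):
        for j in range(p):
            # Inner product of column j with the partial residual
            # that excludes coordinate j's own contribution.
            rho = X[:, j] @ r + col_sq[j] * b[j]
            b_new = soft_threshold(rho, lam) / col_sq[j]
            r += X[:, j] * (b[j] - b_new)   # keep residual in sync
            b[j] = b_new
    return b
```

With orthogonal columns the sweep converges in one pass and each coefficient is simply the least-squares estimate shrunk toward zero by λ, which is the "continuous model selection" behavior the abstract describes.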


Full work available at URL: https://arxiv.org/abs/0803.3876














This page was built for publication: Coordinate descent algorithms for lasso penalized regression
