Some sharp performance bounds for least squares regression with L₁ regularization

From MaRDI portal
Publication: 834334

DOI: 10.1214/08-AOS659
zbMATH Open: 1173.62029
arXiv: 0908.2869
OpenAlex: W3104148512
Wikidata: Q105584240 (Scholia: Q105584240)
MaRDI QID: Q834334
FDO: Q834334


Authors: Tong Zhang


Publication date: 19 August 2009

Published in: The Annals of Statistics

Abstract: We derive sharp performance bounds for least squares regression with \(L_1\) regularization, from the perspectives of parameter estimation accuracy and feature selection quality. The main result, proved for \(L_1\) regularization, extends a similar result in [Ann. Statist. 35 (2007) 2313--2351] for the Dantzig selector, and it gives an affirmative answer to an open question in [Ann. Statist. 35 (2007) 2358--2364]. Moreover, the result leads to an extended view of feature selection that allows less restrictive conditions than some recent work. Based on these theoretical insights, a novel two-stage \(L_1\)-regularization procedure with selective penalization is analyzed. It is shown that if the target parameter vector can be decomposed as the sum of a sparse vector with large coefficients and a less sparse vector with relatively small coefficients, then the two-stage procedure can achieve improved performance.
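
To make the two-stage procedure concrete, here is a minimal NumPy sketch under stated assumptions: a plain cyclic coordinate-descent solver for the weighted Lasso objective (1/2)||y - Xb||^2 + sum_j lam_j |b_j|, a hard threshold to pick the first-stage support, and a zero second-stage penalty on the selected coordinates. The function names (weighted_lasso_cd, two_stage_l1), the solver, the fixed iteration count, and the thresholding rule are illustrative choices, not the paper's exact algorithm or tuning.

import numpy as np

def weighted_lasso_cd(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for (1/2)*||y - X b||^2 + sum_j lam[j]*|b[j]|.

    lam is a per-coordinate penalty vector; lam[j] = 0 leaves b[j] unpenalized.
    """
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)  # ||X_j||^2 for each column
    for _ in range(n_iter):
        for j in range(p):
            if col_sq[j] == 0.0:
                continue
            # partial residual with feature j's current contribution removed
            r_j = y - X @ b + X[:, j] * b[j]
            rho = X[:, j] @ r_j
            # soft-thresholding update for coordinate j
            b[j] = np.sign(rho) * max(abs(rho) - lam[j], 0.0) / col_sq[j]
    return b

def two_stage_l1(X, y, lam1, lam2, threshold):
    p = X.shape[1]
    # Stage 1: ordinary Lasso to locate the large coefficients.
    b1 = weighted_lasso_cd(X, y, np.full(p, lam1))
    selected = np.abs(b1) > threshold
    # Stage 2: selective penalization -- no penalty on the selected
    # coordinates, an L1 penalty lam2 on the remaining ones.
    lam_vec = np.where(selected, 0.0, lam2)
    return weighted_lasso_cd(X, y, lam_vec)

Removing the penalty on the first-stage support reduces the shrinkage bias on the large coefficients, which is the mechanism behind the improvement described in the abstract when the target splits into a few large and many small coefficients.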


Full work available at URL: https://arxiv.org/abs/0908.2869








Cites Work


Cited In (58)





