Support recovery without incoherence: a case for nonconvex regularization

Publication:682289

DOI: 10.1214/16-AOS1530 · zbMATH Open: 1385.62008 · arXiv: 1412.5632 · OpenAlex: W2964346891 · MaRDI QID: Q682289


Authors: Po-Ling Loh, Martin J. Wainwright


Publication date: 14 February 2018

Published in: The Annals of Statistics

Abstract: We demonstrate that the primal-dual witness proof method may be used to establish variable selection consistency and ℓ∞-bounds for sparse regression problems, even when the loss function and/or regularizer are nonconvex. Using this method, we derive two theorems concerning support recovery and ℓ∞-guarantees for the regression estimator in a general setting. Our results provide rigorous theoretical justification for the use of nonconvex regularization: For certain nonconvex regularizers with vanishing derivative away from the origin, support recovery consistency may be guaranteed without requiring the typical incoherence conditions present in ℓ1-based methods. We then derive several corollaries that illustrate the wide applicability of our method to analyzing composite objective functions involving losses such as least squares, nonconvex modified least squares for errors-in-variables linear regression, the negative log-likelihood for generalized linear models, and the graphical Lasso. We conclude with empirical studies to corroborate our theoretical predictions.
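To make the abstract concrete: the minimax concave penalty (MCP) is a standard example of a nonconvex regularizer whose derivative vanishes away from the origin, the class the abstract singles out. The following is an illustrative sketch (not the paper's exact algorithm or notation) of sparse least-squares regression with an MCP penalty, minimized by simple proximal gradient steps; all function names, parameter choices, and the synthetic data are this sketch's own assumptions.

```python
import numpy as np

def mcp_penalty(t, lam, gamma):
    """MCP: behaves like lam*|t| near 0, then flattens to a constant cap."""
    a = np.abs(t)
    return np.where(a <= gamma * lam,
                    lam * a - a**2 / (2.0 * gamma),   # concave ramp
                    0.5 * gamma * lam**2)             # flat beyond gamma*lam

def mcp_grad(t, lam, gamma):
    """Derivative of MCP; identically zero for |t| >= gamma*lam."""
    a = np.abs(t)
    return np.where(a <= gamma * lam, np.sign(t) * (lam - a / gamma), 0.0)

def mcp_prox(z, eta, lam, gamma):
    """Proximal operator of eta*MCP ("firm thresholding"); requires gamma > eta."""
    a = np.abs(z)
    shrunk = np.sign(z) * np.maximum(a - eta * lam, 0.0) / (1.0 - eta / gamma)
    return np.where(a <= gamma * lam, shrunk, z)

def mcp_least_squares(X, y, lam, gamma, steps=500):
    """Minimize (1/2n)||y - Xb||^2 + sum_j MCP(b_j) by proximal gradient steps.

    An illustrative sketch only; step size and iteration count are ad hoc.
    """
    n, p = X.shape
    eta = n / np.linalg.norm(X, 2) ** 2  # ~1/L for the smooth part
    b = np.zeros(p)
    for _ in range(steps):
        grad = X.T @ (X @ b - y) / n
        b = mcp_prox(b - eta * grad, eta, lam, gamma)
    return b
```

On well-conditioned random designs, large true coefficients land in the flat region of the penalty (zero derivative, hence no shrinkage bias), which is the mechanism behind the paper's claim that incoherence-type conditions can be avoided.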


Full work available at URL: https://arxiv.org/abs/1412.5632




Cited In (46)





This page was built for publication: Support recovery without incoherence: a case for nonconvex regularization
