Best subset selection, persistence in high-dimensional statistical learning and optimization under l₁ constraint

Publication: Q869974

DOI: 10.1214/009053606000000768
zbMATH Open: 1106.62022
arXiv: math/0702684
OpenAlex: W3104950855
Wikidata: Q105584233 (Scholia: Q105584233)
MaRDI QID: Q869974
FDO: Q869974


Authors: Eitan Greenshtein


Publication date: 12 March 2007

Published in: The Annals of Statistics

Abstract: Let \((Y, X_1, \ldots, X_m)\) be a random vector. It is desired to predict \(Y\) based on \((X_1, \ldots, X_m)\). Examples of prediction methods are regression, classification using logistic regression or separating hyperplanes, and so on. We consider the problem of best subset selection and study it in the context \(m = n^\alpha\), \(\alpha > 1\), where \(n\) is the number of observations. We investigate procedures based on empirical risk minimization. It is shown that, in common cases, we should aim to find the best subset among those of size of order \(o(n/\log n)\). It is also shown that, in some "asymptotic sense," under a certain sparsity condition there is no loss in letting \(m\) be much larger than \(n\), for example \(m = n^\alpha\), \(\alpha > 1\); this is in comparison to starting with the "best" subset of size smaller than \(n\), regardless of the value of \(\alpha\). We then study conditions under which empirical risk minimization subject to an \(l_1\) constraint yields nearly the best subset. These results extend recent results obtained by Greenshtein and Ritov. Finally, we present a high-dimensional simulation study of a "boosting type" classification procedure.
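
To make the setting concrete, here is a minimal simulation sketch (not taken from the paper) of \(l_1\)-regularized empirical risk minimization when \(m = n^\alpha \gg n\) under a sparsity assumption. It uses scikit-learn's Lasso, which solves the penalized (Lagrangian) form of the \(l_1\)-constrained least-squares problem; all parameter values below are illustrative choices, not the paper's settings.

```python
# Minimal sketch: l1-regularized empirical risk minimization with
# m = n^alpha >> n predictors and a k-sparse true coefficient vector.
# Assumptions (not from the paper): n = 200, alpha = 1.5, k = 5,
# Gaussian design and noise, and an arbitrary lasso penalty weight.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

n, alpha_exp = 200, 1.5
m = int(n ** alpha_exp)      # many more predictors than observations
k = 5                        # true sparsity: only k nonzero coefficients

beta = np.zeros(m)
beta[:k] = rng.normal(size=k)

X = rng.normal(size=(n, m))
y = X @ beta + rng.normal(scale=0.5, size=n)

# l1-regularized least squares; Lasso's `alpha` is the penalty weight,
# unrelated to the exponent alpha in m = n^alpha above.
model = Lasso(alpha=0.1).fit(X, y)

# Out-of-sample prediction risk, compared with an oracle that is told
# the true k-variable subset and fits least squares on it.
X_test = rng.normal(size=(2000, m))
y_test = X_test @ beta + rng.normal(scale=0.5, size=2000)

risk_lasso = np.mean((y_test - model.predict(X_test)) ** 2)

oracle = np.linalg.lstsq(X[:, :k], y, rcond=None)[0]
risk_oracle = np.mean((y_test - X_test[:, :k] @ oracle) ** 2)

print(f"m = {m} predictors, n = {n} observations")
print(f"lasso prediction risk:  {risk_lasso:.3f}")
print(f"oracle-subset risk:     {risk_oracle:.3f}")
```

In this sparse regime the lasso's prediction risk stays close to the oracle subset's risk even though \(m\) is far larger than \(n\), which is the "persistence" phenomenon the abstract describes.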


Full work available at URL: https://arxiv.org/abs/math/0702684










Cited in: 45 publications






