Persistence in high-dimensional linear predictor selection and the virtue of overparametrization


Publication:1763096


DOI: 10.3150/bj/1106314846
zbMath: 1055.62078
Wikidata: Q105584435
Scholia: Q105584435
MaRDI QID: Q1763096

Eitan Greenshtein, Ya'acov Ritov

Publication date: 21 February 2005

Published in: Bernoulli

Full work available at URL: https://doi.org/10.3150/bj/1106314846


62H99: Multivariate analysis

62F30: Parametric inference under constraints

62A01: Foundations and philosophical topics in statistics


Related Items

Sure independence screening in generalized linear models with NP-dimensionality
Nearly unbiased variable selection under minimax concave penalty
A unified approach to model selection and sparse recovery using regularized least squares
Empirical processes with a bounded \(\psi_1\) diameter
\(\ell_1\)-penalization for mixture regression models
Comments on: \(\ell_1\)-penalization for mixture regression models
Consistent group selection in high-dimensional linear regression
Near-ideal model selection by \(\ell_1\) minimization
High-dimensional variable selection
Best subset selection, persistence in high-dimensional statistical learning and optimization under \(l_1\) constraint
Regularization in statistics
The sparsity and bias of the LASSO selection in high-dimensional linear regression
Least angle and \(\ell_1\) penalized regression: a review
SPADES and mixture models
High-dimensional classification using features annealed independence rules
Lasso-type recovery of sparse representations for high-dimensional data
Aggregation by exponential weighting, sharp PAC-Bayesian bounds and sparsity
High-dimensional additive modeling
Some theory for Fisher's linear discriminant function, `naive Bayes', and some alternatives when there are many more variables than observations
Time varying undirected graphs
Persistence of plug-in rule in classification of high dimensional multivariate binary data
Simultaneous analysis of Lasso and Dantzig selector
Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
Bayesian variable selection for high dimensional generalized linear models: convergence rates of the fitted densities
The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder)
Boosting for high-dimensional linear models
High-dimensional graphs and variable selection with the Lasso
On the Consistency of Bayesian Variable Selection for High Dimensional Binary Regression and Classification

