Persistence in high-dimensional linear predictor selection and the virtue of overparametrization
Publication: 1763096
DOI: 10.3150/bj/1106314846
zbMath: 1055.62078
OpenAlex: W2071168995
Wikidata: Q105584435 (Scholia: Q105584435)
MaRDI QID: Q1763096
Authors: Ya'acov Ritov, Eitan Greenshtein
Publication date: 21 February 2005
Published in: Bernoulli
Full work available at URL: https://doi.org/10.3150/bj/1106314846
Mathematics Subject Classification: Multivariate analysis (62H99); Parametric inference under constraints (62F30); Foundations and philosophical topics in statistics (62A01)
Related Items
- Sample average approximation with heavier tails II: localization in stochastic convex optimization and persistence results for the Lasso
- Post-selection Inference of High-dimensional Logistic Regression Under Case–Control Design
- Nonparametric Prediction Distribution from Resolution-Wise Regression with Heterogeneous Data
- Canonical thresholding for nonsparse high-dimensional linear regression
- Greedy algorithms for prediction
- Best subset selection via a modern optimization lens
- Nonparametric empirical Bayes estimator in simultaneous estimation of Poisson means with application to mass spectrometry data
- Near-ideal model selection by \(\ell_1\) minimization
- High-dimensional variable selection
- SLOPE is adaptive to unknown sparsity and asymptotically minimax
- Aggregated hold out for sparse linear regression with a robust loss function
- Adaptive threshold-based classification of sparse high-dimensional data
- Robust inference of risks of large portfolios
- Persistence of plug-in rule in classification of high dimensional multivariate binary data
- Estimation for High-Dimensional Linear Mixed-Effects Models Using ℓ1-Penalization
- Ridge regression revisited: debiasing, thresholding and bootstrap
- A component lasso
- Simultaneous analysis of Lasso and Dantzig selector
- On stepwise pattern recovery of the fused Lasso
- Best subset selection, persistence in high-dimensional statistical learning and optimization under \(l_1\) constraint
- Large-scale multivariate sparse regression with applications to UK Biobank
- Oracle inequalities for the lasso in the Cox model
- Best subset binary prediction
- Empirical processes with a bounded \(\psi_1\) diameter
- Statistical significance in high-dimensional linear models
- Penalized profiled semiparametric estimating functions
- Regularization in statistics
- Asymptotic properties of Lasso+mLS and Lasso+Ridge in sparse high-dimensional linear regression
- Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization
- \(\ell_1\)-regularized linear regression: persistence and oracle inequalities
- \(\ell_1\)-penalization for mixture regression models
- Comments on: \(\ell_1\)-penalization for mixture regression models
- Rapid penalized likelihood-based outlier detection via heteroskedasticity test
- Consistent group selection in high-dimensional linear regression
- Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
- Group selection in high-dimensional partially linear additive models
- On the asymptotic properties of the group lasso estimator for linear models
- MAP model selection in Gaussian regression
- The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso)
- The log-linear group-lasso estimator and its asymptotic properties
- Structured, sparse regression with application to HIV drug resistance
- Kullback-Leibler aggregation and misspecified generalized linear models
- Boosting algorithms: regularization, prediction and model fitting
- Time varying undirected graphs
- Bayesian variable selection for high dimensional generalized linear models: convergence rates of the fitted densities
- A new perspective on least squares under convex constraint
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- From Fixed-X to Random-X Regression: Bias-Variance Decompositions, Covariance Penalties, and Prediction Error Estimation
- Oracle inequalities for high-dimensional prediction
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- A significance test for the lasso
- Discussion: "A significance test for the lasso"
- Rejoinder: "A significance test for the lasso"
- Vast Portfolio Selection With Gross-Exposure Constraints
- Testing homogeneity of proportions from sparse binomial data with a large number of groups
- Some theory for Fisher's linear discriminant function, 'naive Bayes', and some alternatives when there are many more variables than observations
- High-dimensional variable screening and bias in subsequent inference, with an empirical comparison
- Nonparametric time series forecasting with dynamic updating
- Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models
- Least angle and \(\ell_1\) penalized regression: a review
- Sure independence screening in generalized linear models with NP-dimensionality
- On asymptotically optimal confidence regions and tests for high-dimensional models
- Lasso Inference for High-Dimensional Time Series
- Boosting for high-dimensional linear models
- High-dimensional graphs and variable selection with the Lasso
- Nearly unbiased variable selection under minimax concave penalty
- Model selection in utility-maximizing binary prediction
- SPADES and mixture models
- Prediction and estimation consistency of sparse multi-class penalized optimal scoring
- A unified approach to model selection and sparse recovery using regularized least squares
- Learning without Concentration
- On the sensitivity of the Lasso to the number of predictor variables
- Forecasting functional time series
- Statistical significance of the Netflix challenge
- Adaptive Bayesian density regression for high-dimensional data
- Partially linear additive quantile regression in ultra-high dimension
- Leave-one-out cross-validation is risk consistent for Lasso
- High-dimensional classification using features annealed independence rules
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- A general theory of concave regularization for high-dimensional sparse estimation problems
- Testing the homogeneity of risk differences with sparse count data
- Lasso-type recovery of sparse representations for high-dimensional data
- Some properties of generalized fused Lasso and its applications to high dimensional data
- Aggregation by exponential weighting, sharp PAC-Bayesian bounds and sparsity
- Inverse Optimization with Noisy Data
- Low-Rank Approximation and Completion of Positive Tensors
- Sure independence screening in the presence of missing data
- Oracle inequalities for weighted group Lasso in high-dimensional misspecified Cox models
- Optimal linear discriminators for the discrete choice model in growing dimensions
- Localized Gaussian width of \(M\)-convex hulls with applications to Lasso and convex aggregation
- On the Consistency of Bayesian Variable Selection for High Dimensional Binary Regression and Classification
- High-dimensional additive modeling
- Doubly penalized estimation in additive regression with high-dimensional data
- Sparsistency and agnostic inference in sparse PCA
- A computationally efficient approach to estimating species richness and rarefaction curve
- Sure Independence Screening for Ultrahigh Dimensional Feature Space
- Adaptive estimation of the baseline hazard function in the Cox model by model selection, with high-dimensional covariates
Uses Software
Cites Work
- Asymptotic behavior of M-estimators of p regression parameters when \(p^2/n\) is large. I. Consistency
- The smallest eigenvalue of a large dimensional Wishart matrix
- Asymptotics in statistics: some basic concepts
- Statistical modeling: The two cultures. (With comments and a rejoinder).
- Some theory for Fisher's linear discriminant function, 'naive Bayes', and some alternatives when there are many more variables than observations
- Robust regression: Asymptotics, conjectures and Monte Carlo
- Functional aggregation for nonparametric regression.
- Least angle regression. (With discussion)
- The risk inflation criterion for multiple regression
- Atomic Decomposition by Basis Pursuit
- How Many Variables Should be Entered in a Regression Equation?
- Ideal spatial adaptation by wavelet shrinkage
- Efficient agnostic learning of neural networks with bounded fan-in