Best subset selection, persistence in high-dimensional statistical learning and optimization under \(l_1\) constraint
Publication: 869974
DOI: 10.1214/009053606000000768
zbMath: 1106.62022
arXiv: math/0702684
Wikidata: Q105584233
Scholia: Q105584233
MaRDI QID: Q869974
Publication date: 12 March 2007
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/math/0702684
Related Items
- Near-ideal model selection by \(\ell_1\) minimization
- Properties and refinements of the fused Lasso
- Regularization in statistics
- Gibbs posterior for variable selection in high-dimensional classification and data mining
- Variable selection and updating in model-based discriminant analysis for high dimensional data with food authenticity applications
- High-dimensional classification using features annealed independence rules
- Elastic-net regularization in learning theory
- High-dimensional generalized linear models and the lasso
- Bayesian variable selection for high dimensional generalized linear models: convergence rates of the fitted densities
Cites Work
- Asymptotic behavior of M-estimators of \(p\) regression parameters when \(p^2/n\) is large. I. Consistency
- Asymptotic behavior of M-estimators for the linear model
- Statistical modeling: The two cultures. (With comments and a rejoinder).
- Persistence in high-dimensional linear predictor selection and the virtue of overparametrization
- Some theory for Fisher's linear discriminant function, 'naive Bayes', and some alternatives when there are many more variables than observations
- Robust regression: Asymptotics, conjectures and Monte Carlo
- Functional aggregation for nonparametric regression.
- Nonconcave penalized likelihood with a diverging number of parameters.
- Least angle regression. (With discussion)
- Population theory for boosting ensembles.
- On the Bayes-risk consistency of regularized boosting methods.
- High-dimensional graphs and variable selection with the Lasso
- Atomic Decomposition by Basis Pursuit
- DNA Microarray Experiments: Biological and Technological Aspects
- Efficient agnostic learning of neural networks with bounded fan-in
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- For most large underdetermined systems of linear equations the minimal \(\ell_1\)-norm solution is also the sparsest solution
- The elements of statistical learning. Data mining, inference, and prediction
- Discussion on boosting papers.
- Discussion on boosting papers.