Best subset selection, persistence in high-dimensional statistical learning and optimization under \(l_1\) constraint
Publication: 869974
DOI: 10.1214/009053606000000768
zbMath: 1106.62022
arXiv: math/0702684
OpenAlex: W3104950855
Wikidata: Q105584233
Scholia: Q105584233
MaRDI QID: Q869974
Publication date: 12 March 2007
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/math/0702684
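The paper concerns persistence of \(l_1\)-constrained least-squares predictors when the number of variables grows with the sample size. In the Greenshtein-Ritov sense (a standard formulation, sketched here rather than quoted from the paper), a sequence of estimators \(\hat{\beta}_n\) chosen from constraint sets \(B_n = \{\beta : \|\beta\|_1 \le b_n\}\) is persistent if
\[
L(\hat{\beta}_n) - \inf_{\beta \in B_n} L(\beta) \longrightarrow 0 \quad \text{in probability},
\]
where \(L(\beta) = E(Y - \beta^{T} X)^2\) is the prediction risk. As a purely illustrative sketch (not code from the paper, and with synthetic data), the penalized Lasso is the usual computational surrogate for least squares under an \(l_1\) budget; the snippet assumes NumPy and scikit-learn.

    # Illustrative sketch only, not code from the paper: least squares
    # under an l1 budget, solved via the equivalent penalized (Lasso)
    # form. Assumes NumPy and scikit-learn; the data below is synthetic.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, p = 100, 1000                      # "p much larger than n" regime
    X = rng.standard_normal((n, p))
    beta = np.zeros(p)
    beta[:5] = 2.0                        # sparse true coefficient vector
    y = X @ beta + rng.standard_normal(n)

    # Each penalty level alpha corresponds to some l1 budget b in
    # ||beta||_1 <= b; smaller alpha means a looser budget.
    model = Lasso(alpha=0.2, max_iter=50_000).fit(X, y)
    print("nonzero coefficients:", np.count_nonzero(model.coef_))

In the persistence framework, the question is how fast the budget \(b_n\) may grow with \(n\) while the excess risk above still tends to zero.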
Related Items
- Greedy algorithms for prediction
- Best subset selection via a modern optimization lens
- Near-ideal model selection by \(\ell _{1}\) minimization
- Properties and refinements of the fused Lasso
- Best subset binary prediction
- Regularization in statistics
- \(\ell _{1}\)-regularized linear regression: persistence and oracle inequalities
- Sample average approximation with heavier tails II: localization in stochastic convex optimization and persistence results for the Lasso
- Difference-of-Convex Learning: Directional Stationarity, Optimality, and Sparsity
- Approximation of functions of few variables in high dimensions
- Constrained optimization for stratified treatment rules in reducing hospital readmission rates of diabetic patients
- High-dimensional generalized linear models and the lasso
- Gene selection and prediction for cancer classification using support vector machines with a reject option
- Sharp support recovery from noisy random measurements by \(\ell_1\)-minimization
- Complexity of approximation of functions of few variables in high dimensions
- On the asymptotic properties of the group lasso estimator for linear models
- Honest variable selection in linear and logistic regression models via \(\ell _{1}\) and \(\ell _{1}+\ell _{2}\) penalization
- The log-linear group-lasso estimator and its asymptotic properties
- Kullback-Leibler aggregation and misspecified generalized linear models
- Bayesian variable selection for high dimensional generalized linear models: convergence rates of the fitted densities
- Gibbs posterior for variable selection in high-dimensional classification and data mining
- Nonparametric time series forecasting with dynamic updating
- Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models
- Variable selection and updating in model-based discriminant analysis for high dimensional data with food authenticity applications
- RISK MINIMIZATION FOR TIME SERIES BINARY CHOICE WITH VARIABLE SELECTION
- Model selection in utility-maximizing binary prediction
- Learning without Concentration
- On the sensitivity of the Lasso to the number of predictor variables
- Forecasting functional time series
- High-dimensional classification using features annealed independence rules
- OR Forum—An Algorithmic Approach to Linear Regression
- Graphical-model based high dimensional generalized linear models
- Elastic-net regularization in learning theory
- Fast Best Subset Selection: Coordinate Descent and Local Combinatorial Optimization Algorithms
- Sure Independence Screening for Ultrahigh Dimensional Feature Space
- Sparse regression at scale: branch-and-bound rooted in first-order optimization
- Genetic Algorithm in the Wavelet Domain for Large \(p\) Small \(n\) Regression
- On two continuum armed bandit problems in high dimensions
Cites Work
- Asymptotic behavior of M-estimators of p regression parameters when \(p^2/n\) is large. I. Consistency
- Asymptotic behavior of M-estimators for the linear model
- Statistical modeling: The two cultures. (With comments and a rejoinder).
- Persistence in high-dimensional linear predictor selection and the virtue of overparametrization
- Some theory for Fisher's linear discriminant function, `naive Bayes', and some alternatives when there are many more variables than observations
- Robust regression: Asymptotics, conjectures and Monte Carlo
- Functional aggregation for nonparametric regression.
- Nonconcave penalized likelihood with a diverging number of parameters.
- Least angle regression. (With discussion)
- Population theory for boosting ensembles.
- On the Bayes-risk consistency of regularized boosting methods.
- High-dimensional graphs and variable selection with the Lasso
- Atomic Decomposition by Basis Pursuit
- DNA Microarray Experiments: Biological and Technological Aspects
- Efficient agnostic learning of neural networks with bounded fan-in
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- For most large underdetermined systems of linear equations the minimal \(\ell_1\)-norm solution is also the sparsest solution
- The elements of statistical learning. Data mining, inference, and prediction
- Discussion on boosting papers.