Best subset selection, persistence in high-dimensional statistical learning and optimization under \(l_1\) constraint

From MaRDI portal
Publication:869974

DOI: 10.1214/009053606000000768
zbMath: 1106.62022
arXiv: math/0702684
OpenAlex: W3104950855
Wikidata: Q105584233
Scholia: Q105584233
MaRDI QID: Q869974

Eitan Greenshtein

Publication date: 12 March 2007

Published in: The Annals of Statistics

Full work available at URL: https://arxiv.org/abs/math/0702684



Related Items

Greedy algorithms for prediction
Best subset selection via a modern optimization lens
Near-ideal model selection by \(\ell_1\) minimization
Properties and refinements of the fused Lasso
Best subset binary prediction
Regularization in statistics
\(\ell_1\)-regularized linear regression: persistence and oracle inequalities
Sample average approximation with heavier tails II: localization in stochastic convex optimization and persistence results for the Lasso
Difference-of-Convex Learning: Directional Stationarity, Optimality, and Sparsity
Approximation of functions of few variables in high dimensions
Constrained optimization for stratified treatment rules in reducing hospital readmission rates of diabetic patients
High-dimensional generalized linear models and the lasso
Gene selection and prediction for cancer classification using support vector machines with a reject option
Sharp support recovery from noisy random measurements by \(\ell_1\)-minimization
Complexity of approximation of functions of few variables in high dimensions
On the asymptotic properties of the group lasso estimator for linear models
Honest variable selection in linear and logistic regression models via \(\ell_1\) and \(\ell_1+\ell_2\) penalization
The log-linear group-lasso estimator and its asymptotic properties
Kullback-Leibler aggregation and misspecified generalized linear models
Bayesian variable selection for high dimensional generalized linear models: convergence rates of the fitted densities
Gibbs posterior for variable selection in high-dimensional classification and data mining
Nonparametric time series forecasting with dynamic updating
Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models
Variable selection and updating in model-based discriminant analysis for high dimensional data with food authenticity applications
RISK MINIMIZATION FOR TIME SERIES BINARY CHOICE WITH VARIABLE SELECTION
Model selection in utility-maximizing binary prediction
Learning without Concentration
On the sensitivity of the Lasso to the number of predictor variables
Forecasting functional time series
High-dimensional classification using features annealed independence rules
OR Forum—An Algorithmic Approach to Linear Regression
Graphical-model based high dimensional generalized linear models
Elastic-net regularization in learning theory
Fast Best Subset Selection: Coordinate Descent and Local Combinatorial Optimization Algorithms
Sure Independence Screening for Ultrahigh Dimensional Feature Space
Sparse regression at scale: branch-and-bound rooted in first-order optimization
Genetic Algorithm in the Wavelet Domain for Large p Small n Regression
On two continuum armed bandit problems in high dimensions


Cites Work