Best subset selection, persistence in high-dimensional statistical learning and optimization under \(l_1\) constraint

From MaRDI portal

Publication:869974

DOI: 10.1214/009053606000000768
zbMath: 1106.62022
arXiv: math/0702684
OpenAlex: W3104950855
Wikidata: Q105584233
Scholia: Q105584233
MaRDI QID: Q869974

Eitan Greenshtein

Publication date: 12 March 2007

Published in: The Annals of Statistics

Full work available at URL: https://arxiv.org/abs/math/0702684






Related Items (43)

Greedy algorithms for prediction
Best subset selection via a modern optimization lens
Near-ideal model selection by \(\ell _{1}\) minimization
Properties and refinements of the fused Lasso
Best subset binary prediction
Regularization in statistics
\(\ell _{1}\)-regularized linear regression: persistence and oracle inequalities
Sample average approximation with heavier tails II: localization in stochastic convex optimization and persistence results for the Lasso
Difference-of-Convex Learning: Directional Stationarity, Optimality, and Sparsity
Approximation of functions of few variables in high dimensions
Constrained optimization for stratified treatment rules in reducing hospital readmission rates of diabetic patients
High-dimensional generalized linear models and the lasso
Gene selection and prediction for cancer classification using support vector machines with a reject option
Sharp support recovery from noisy random measurements by \(\ell_1\)-minimization
Complexity of approximation of functions of few variables in high dimensions
On the asymptotic properties of the group lasso estimator for linear models
Honest variable selection in linear and logistic regression models via \(\ell _{1}\) and \(\ell _{1}+\ell _{2}\) penalization
The log-linear group-lasso estimator and its asymptotic properties
Kullback-Leibler aggregation and misspecified generalized linear models
Unnamed Item
Bayesian variable selection for high dimensional generalized linear models: convergence rates of the fitted densities
Gibbs posterior for variable selection in high-dimensional classification and data mining
Nonparametric time series forecasting with dynamic updating
Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models
Variable selection and updating in model-based discriminant analysis for high dimensional data with food authenticity applications
Risk minimization for time series binary choice with variable selection
Model selection in utility-maximizing binary prediction
Learning without Concentration
On the sensitivity of the Lasso to the number of predictor variables
Forecasting functional time series
High-dimensional classification using features annealed independence rules
Mathematical programming for simultaneous feature selection and outlier detection under l1 norm
OR Forum—An Algorithmic Approach to Linear Regression
Graphical-model based high dimensional generalized linear models
The statistical rate for support matrix machines under low rankness and row (column) sparsity
Elastic-net regularization in learning theory
Fast Best Subset Selection: Coordinate Descent and Local Combinatorial Optimization Algorithms
Sure Independence Screening for Ultrahigh Dimensional Feature Space
Unnamed Item
Sparse regression at scale: branch-and-bound rooted in first-order optimization
Genetic Algorithm in the Wavelet Domain for Large p Small n Regression
Unnamed Item
On two continuum armed bandit problems in high dimensions


Uses Software



Cites Work




This page was built for publication: Best subset selection, persistence in high-dimensional statistical learning and optimization under \(l_1\) constraint