Sparsity in penalized empirical risk minimization
Publication: 838303
DOI: 10.1214/07-AIHP146
zbMath: 1168.62044
OpenAlex: W1972968086
Wikidata: Q105584266
Scholia: Q105584266
MaRDI QID: Q838303
Publication date: 24 August 2009
Published in: Annales de l'Institut Henri Poincaré. Probabilités et Statistiques
Full work available at URL: https://eudml.org/doc/78023
Keywords: sparsity; oracle inequalities; empirical risk; Rademacher processes; \(\ell_p\)-penalty; penalized empirical risk
MSC classification:
- Classification and discrimination; cluster analysis (statistical aspects) (62H30)
- Inequalities; stochastic orderings (60E15)
- Linear inference, regression (62J99)
- Order statistics; empirical distribution functions (62G30)
- Nonparametric inference (62G99)
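For orientation, a minimal sketch of the generic objective behind these keywords: penalized empirical risk minimization over a dictionary. The notation (dictionary \(h_1, \dots, h_N\), loss \(\ell\), penalty weight \(\varepsilon\)) is illustrative and not taken verbatim from the paper:
\[
\hat{\lambda} \;=\; \operatorname*{arg\,min}_{\lambda \in \mathbb{R}^N}
\;\frac{1}{n} \sum_{i=1}^{n} \ell\Bigl(Y_i, \sum_{j=1}^{N} \lambda_j h_j(X_i)\Bigr)
\;+\; \varepsilon\, \|\lambda\|_{\ell_p}^{p},
\qquad
\|\lambda\|_{\ell_p}^{p} \;=\; \sum_{j=1}^{N} |\lambda_j|^{p}.
\]
Taking \(p = 1\) recovers the Lasso-type penalty studied by many of the related items below.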
Related Items
- Some sharp performance bounds for least squares regression with \(L_1\) regularization
- Simultaneous analysis of Lasso and Dantzig selector
- Statistical significance in high-dimensional linear models
- The Dantzig selector and sparsity oracle inequalities
- The learning rate of \(l_2\)-coefficient regularized classification with strong loss
- \(\ell _{1}\)-regularized linear regression: persistence and oracle inequalities
- Sample average approximation with heavier tails II: localization in stochastic convex optimization and persistence results for the Lasso
- Sparsity in multiple kernel learning
- Analysis of sparse recovery for Legendre expansions using envelope bound
- Regularized learning schemes in feature Banach spaces
- Support Recovery and Parameter Identification of Multivariate ARMA Systems with Exogenous Inputs
- Optimal Algorithms for Stochastic Complementary Composite Minimization
- Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
- \(\ell_1\)-penalized quantile regression in high-dimensional sparse models
- Multi-stage convex relaxation for feature selection
- High-dimensional additive hazards models and the lasso
- The Lasso problem and uniqueness
- On the asymptotic properties of the group lasso estimator for linear models
- Honest variable selection in linear and logistic regression models via \(\ell _{1}\) and \(\ell _{1}+\ell _{2}\) penalization
- On the conditions used to prove oracle results for the Lasso
- The Lasso as an \(\ell _{1}\)-ball model selection procedure
- The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso)
- Sparsity considerations for dependent variables
- Least squares after model selection in high-dimensional sparse models
- Mirror averaging with sparsity priors
- General nonexact oracle inequalities for classes with a subexponential envelope
- Error bounds for \(l^p\)-norm multiple kernel learning with least square loss
- Support vector machines with a reject option
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
- \(L_1\)-penalization in functional linear regression with subgaussian design
- Consistent learning by composite proximal thresholding
- Overcoming the limitations of phase transition by higher order analysis of regularization techniques
- Pivotal estimation via square-root lasso in nonparametric regression
- Optimal learning rates of \(l^p\)-type multiple kernel learning under general conditions
- High-dimensional variable screening and bias in subsequent inference, with an empirical comparison
- Exponential screening and optimal rates of sparse estimation
- Performance guarantees for individualized treatment rules
- SPADES and mixture models
- Quasi-likelihood and/or robust estimation in high dimensions
- Some theoretical results on the grouped variables Lasso
- Aggregation by exponential weighting, sharp PAC-Bayesian bounds and sparsity
- Generalized support vector regression: Duality and tensor-kernel representation
- Sparse recovery in convex hulls via entropy penalization
- Elastic-net regularization in learning theory
- Simulation-based Value-at-Risk for nonlinear portfolios
- Sparse parameter identification of stochastic dynamical systems
- The Partial Linear Model in High Dimensions
Cites Work
- Concentration inequalities and model selection. École d'Été de Probabilités de Saint-Flour XXXIII -- 2003.
- Risk bounds for model selection via penalization
- Aggregating regression procedures to improve performance
- Mixing strategies for density estimation.
- Statistical learning theory and stochastic optimization. École d'Été de Probabilités de Saint-Flour XXXI -- 2001.
- Weak convergence and empirical processes. With applications to statistics
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- High-dimensional generalized linear models and the lasso
- Sparsity oracle inequalities for the Lasso
- Aggregation for Gaussian regression
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- Reconstruction and subgaussian operators in asymptotic geometric analysis
- High-dimensional graphs and variable selection with the Lasso
- Complexities of convex combinations and bounding the generalization error in classification
- Local Rademacher complexities
- Lectures on Modern Convex Optimization
- Stable recovery of sparse overcomplete representations in the presence of noise
- Learning Theory and Kernel Machines
- For most large underdetermined systems of linear equations the minimal \(\ell_1\)-norm solution is also the sparsest solution
- Stable signal recovery from incomplete and inaccurate measurements
- Compressed sensing
- Some applications of concentration inequalities to statistics