Sparsity in penalized empirical risk minimization

DOI: 10.1214/07-AIHP146
zbMath: 1168.62044
OpenAlex: W1972968086
Wikidata: Q105584266 (Scholia: Q105584266)
MaRDI QID: Q838303

Vladimir I. Koltchinskii

Publication date: 24 August 2009

Published in: Annales de l'Institut Henri Poincaré. Probabilités et Statistiques

Full work available at URL: https://eudml.org/doc/78023

Related Items

Some sharp performance bounds for least squares regression with \(L_1\) regularization
Simultaneous analysis of Lasso and Dantzig selector
Statistical significance in high-dimensional linear models
The Dantzig selector and sparsity oracle inequalities
The learning rate of \(l_2\)-coefficient regularized classification with strong loss
\(\ell _{1}\)-regularized linear regression: persistence and oracle inequalities
Sample average approximation with heavier tails II: localization in stochastic convex optimization and persistence results for the Lasso
Sparsity in multiple kernel learning
Analysis of sparse recovery for Legendre expansions using envelope bound
Regularized learning schemes in feature Banach spaces
Support Recovery and Parameter Identification of Multivariate ARMA Systems with Exogenous Inputs
Optimal Algorithms for Stochastic Complementary Composite Minimization
Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
\(\ell_1\)-penalized quantile regression in high-dimensional sparse models
Multi-stage convex relaxation for feature selection
High-dimensional additive hazards models and the lasso
The Lasso problem and uniqueness
On the asymptotic properties of the group lasso estimator for linear models
Honest variable selection in linear and logistic regression models via \(\ell _{1}\) and \(\ell _{1}+\ell _{2}\) penalization
On the conditions used to prove oracle results for the Lasso
The Lasso as an \(\ell _{1}\)-ball model selection procedure
The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso)
Sparsity considerations for dependent variables
Least squares after model selection in high-dimensional sparse models
Mirror averaging with sparsity priors
General nonexact oracle inequalities for classes with a subexponential envelope
Error bounds for \(l^p\)-norm multiple kernel learning with least square loss
Support vector machines with a reject option
Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
\(L_1\)-penalization in functional linear regression with subgaussian design
Consistent learning by composite proximal thresholding
Overcoming the limitations of phase transition by higher order analysis of regularization techniques
Pivotal estimation via square-root lasso in nonparametric regression
Optimal learning rates of \(l^p\)-type multiple kernel learning under general conditions
High-dimensional variable screening and bias in subsequent inference, with an empirical comparison
Exponential screening and optimal rates of sparse estimation
Performance guarantees for individualized treatment rules
SPADES and mixture models
Quasi-likelihood and/or robust estimation in high dimensions
Some theoretical results on the grouped variables Lasso
Aggregation by exponential weighting, sharp PAC-Bayesian bounds and sparsity
Generalized support vector regression: Duality and tensor-kernel representation
Sparse recovery in convex hulls via entropy penalization
Elastic-net regularization in learning theory
Simulation-based Value-at-Risk for nonlinear portfolios
Sparse parameter identification of stochastic dynamical systems
The Partial Linear Model in High Dimensions

Cites Work