Sparsity in penalized empirical risk minimization

DOI: 10.1214/07-AIHP146
zbMath: 1168.62044
OpenAlex: W1972968086
Wikidata: Q105584266
Scholia: Q105584266
MaRDI QID: Q838303

Vladimir I. Koltchinskii

Publication date: 24 August 2009

Published in: Annales de l'Institut Henri Poincaré. Probabilités et Statistiques

Full work available at URL: https://eudml.org/doc/78023
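
The title refers to empirical risk minimization with a sparsity-inducing penalty; its best-known special case is ℓ1-penalized least squares (the Lasso). As a minimal, hypothetical sketch of that template (not the paper's construction), the snippet below fits a Lasso estimator to synthetic data whose true coefficient vector is sparse; the sample size n, dimension p, sparsity level s, and penalty weight alpha are illustrative choices, and scikit-learn's Lasso stands in for a generic penalized-ERM solver.

```python
# Hypothetical illustration (not from the paper): l1-penalized empirical
# risk minimization, i.e. the Lasso objective
#     min_w (1/n) * sum_i (y_i - <x_i, w>)^2 + alpha * ||w||_1
# fit on synthetic data with a sparse ground-truth coefficient vector.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s = 200, 500, 5            # samples, dimension, sparsity (illustrative)

X = rng.standard_normal((n, p))
w_true = np.zeros(p)
w_true[:s] = 1.0                 # only the first s coordinates are active
y = X @ w_true + 0.1 * rng.standard_normal(n)

# alpha plays the role of the regularization parameter in penalized ERM;
# larger values drive more coefficients exactly to zero.
model = Lasso(alpha=0.05, max_iter=10_000).fit(X, y)
print("nonzero coefficients:", np.count_nonzero(model.coef_))
```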




Related Items (47)

Some sharp performance bounds for least squares regression with \(L_1\) regularization
Simultaneous analysis of Lasso and Dantzig selector
Statistical significance in high-dimensional linear models
The Dantzig selector and sparsity oracle inequalities
The learning rate of \(l_2\)-coefficient regularized classification with strong loss
\(\ell _{1}\)-regularized linear regression: persistence and oracle inequalities
Sample average approximation with heavier tails II: localization in stochastic convex optimization and persistence results for the Lasso
Sparsity in multiple kernel learning
Analysis of sparse recovery for Legendre expansions using envelope bound
Regularized learning schemes in feature Banach spaces
Support Recovery and Parameter Identification of Multivariate ARMA Systems with Exogenous Inputs
Optimal Algorithms for Stochastic Complementary Composite Minimization
Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
\(\ell_1\)-penalized quantile regression in high-dimensional sparse models
Multi-stage convex relaxation for feature selection
High-dimensional additive hazards models and the lasso
The Lasso problem and uniqueness
On the asymptotic properties of the group lasso estimator for linear models
Honest variable selection in linear and logistic regression models via \(\ell _{1}\) and \(\ell _{1}+\ell _{2}\) penalization
On the conditions used to prove oracle results for the Lasso
The Lasso as an \(\ell _{1}\)-ball model selection procedure
The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso)
Sparsity considerations for dependent variables
Least squares after model selection in high-dimensional sparse models
Mirror averaging with sparsity priors
General nonexact oracle inequalities for classes with a subexponential envelope
Error bounds for \(l^p\)-norm multiple kernel learning with least square loss
Support vector machines with a reject option
Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
\(L_1\)-penalization in functional linear regression with subgaussian design
Consistent learning by composite proximal thresholding
Overcoming the limitations of phase transition by higher order analysis of regularization techniques
Pivotal estimation via square-root lasso in nonparametric regression
Optimal learning rates of \(l^p\)-type multiple kernel learning under general conditions
High-dimensional variable screening and bias in subsequent inference, with an empirical comparison
Exponential screening and optimal rates of sparse estimation
Performance guarantees for individualized treatment rules
SPADES and mixture models
Quasi-likelihood and/or robust estimation in high dimensions
Some theoretical results on the grouped variables Lasso
Aggregation by exponential weighting, sharp PAC-Bayesian bounds and sparsity
Generalized support vector regression: Duality and tensor-kernel representation
Sparse recovery in convex hulls via entropy penalization
Elastic-net regularization in learning theory
Simulation-based Value-at-Risk for nonlinear portfolios
Sparse parameter identification of stochastic dynamical systems
The Partial Linear Model in High Dimensions



