Aggregation and Sparsity Via ℓ1 Penalized Least Squares
Publication: 5307581
DOI: 10.1007/11776420_29
zbMath: 1143.62319
OpenAlex: W1549451259
Wikidata: Q105584913 (Scholia: Q105584913)
MaRDI QID: Q5307581
Authors: Florentina Bunea, Alexandre B. Tsybakov, Marten H. Wegkamp
Publication date: 14 September 2007
Published in: Learning Theory
Full work available at URL: https://doi.org/10.1007/11776420_29
Related Items
Oracle inequalities for the Lasso in the high-dimensional Aalen multiplicative intensity model
Classification of longitudinal data through a semiparametric mixed-effects model based on lasso-type estimators
Simultaneous analysis of Lasso and Dantzig selector
A general procedure to combine estimators
\(\ell _{1}\)-regularized linear regression: persistence and oracle inequalities
Sample average approximation with heavier tails II: localization in stochastic convex optimization and persistence results for the Lasso
Lasso-type estimators for semiparametric nonlinear mixed-effects models estimation
Non-asymptotic oracle inequalities for the Lasso and Group Lasso in high dimensional logistic model
High-dimensional generalized linear models and the lasso
Aggregation of estimators and stochastic optimization
Adaptive Dantzig density estimation
\(\ell_1\)-penalized quantile regression in high-dimensional sparse models
PAC-Bayesian estimation and prediction in sparse additive models
Sparse regression learning by aggregation and Langevin Monte-Carlo
The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso)
Least squares after model selection in high-dimensional sparse models
Mirror averaging with sparsity priors
Parametric or nonparametric? A parametricness index for model selection
Sparse trace norm regularization
Prediction error bounds for linear regression with the TREX
SPADES and mixture models
On the sensitivity of the Lasso to the number of predictor variables
Quasi-likelihood and/or robust estimation in high dimensions
Aggregation by exponential weighting, sharp PAC-Bayesian bounds and sparsity
Elastic-net regularization in learning theory
Aggregation using input-output trade-off
Sharp Oracle Inequalities for Square Root Regularization
High-dimensional additive modeling
Lasso and probabilistic inequalities for multivariate point processes
Adaptive estimation of the baseline hazard function in the Cox model by model selection, with high-dimensional covariates