Some sharp performance bounds for least squares regression with \(L_1\) regularization

From MaRDI portal
Publication: 834334

DOI: 10.1214/08-AOS659
zbMath: 1173.62029
arXiv: 0908.2869
OpenAlex: W3104148512
Wikidata: Q105584240
Scholia: Q105584240
MaRDI QID: Q834334

Tong Zhang

Publication date: 19 August 2009

Published in: The Annals of Statistics

Full work available at URL: https://arxiv.org/abs/0908.2869
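For context, the paper analyzes the standard \(L_1\)-regularized (Lasso) least squares estimator. A sketch of the objective in generic notation (the symbols below are common convention, not taken from the paper's own statement):

```latex
% L_1-regularized least squares (Lasso), generic notation:
% y \in R^n is the response, X \in R^{n x p} the design matrix,
% \lambda > 0 the regularization parameter.
\hat{\beta} = \operatorname*{arg\,min}_{\beta \in \mathbb{R}^{p}}
  \frac{1}{n} \lVert y - X\beta \rVert_2^2 + \lambda \lVert \beta \rVert_1
```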



Related Items

Two-stage convex relaxation approach to least squares loss constrained low-rank plus sparsity optimization problems
Statistical consistency of coefficient-based conditional quantile regression
Near-ideal model selection by \(\ell _{1}\) minimization
Variable Selection With Second-Generation P-Values
DC approximation approaches for sparse optimization
THE COEFFICIENT REGULARIZED REGRESSION WITH RANDOM PROJECTION
\(l_{0}\)-norm based structural sparse least square regression for feature selection
Sparse recovery under matrix uncertainty
An iterative algorithm for fitting nonconvex penalized generalized linear models with grouped predictors
Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization
\(\ell _{1}\)-regularized linear regression: persistence and oracle inequalities
Sample average approximation with heavier tails II: localization in stochastic convex optimization and persistence results for the Lasso
Multistage Convex Relaxation Approach to Rank Regularized Minimization Problems Based on Equivalent Mathematical Program with a Generalized Complementarity Constraint
\(\ell_{1}\)-penalization for mixture regression models
High-Dimensional Gaussian Graphical Regression Models with Covariates
A simple homotopy proximal mapping algorithm for compressive sensing
Sparse estimation via lower-order penalty optimization methods in high-dimensional linear regression
Sharp support recovery from noisy random measurements by \(\ell_1\)-minimization
Multi-stage convex relaxation for feature selection
On the conditions used to prove oracle results for the Lasso
Self-concordant analysis for logistic regression
The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso)
Mirror averaging with sparsity priors
General nonexact oracle inequalities for classes with a subexponential envelope
Which bridge estimator is the best for variable selection?
A multi-stage convex relaxation approach to noisy structured low-rank matrix recovery
A sharp nonasymptotic bound and phase diagram of \(L_{1/2}\) regularization
Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
An efficient adaptive forward-backward selection method for sparse polynomial chaos expansion
I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error
Two-stage convex relaxation approach to low-rank and sparsity regularized least squares loss
Consistent parameter estimation for Lasso and approximate message passing
Tuning parameter selection for the adaptive LASSO in the autoregressive model
Robust dequantized compressive sensing
Sparse trace norm regularization
Exponential screening and optimal rates of sparse estimation
Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models
Concentration estimates for learning with \(\ell ^{1}\)-regularizer and data dependent hypothesis spaces
Greedy variance estimation for the LASSO
The benefit of group sparsity
Prediction and estimation consistency of sparse multi-class penalized optimal scoring
Strong oracle optimality of folded concave penalized estimation
Structured sparsity through convex optimization
A selective review of group selection in high-dimensional models
A general theory of concave regularization for high-dimensional sparse estimation problems
Performance bounds for parameter estimates of high-dimensional linear models with correlated errors
DC Approximation Approach for ℓ0-minimization in Compressed Sensing
Asymptotic normality and optimalities in estimation of large Gaussian graphical models
A hybrid Bregman alternating direction method of multipliers for the linearly constrained difference-of-convex problems
Selection Consistency of Generalized Information Criterion for Sparse Logistic Model
A general double-proximal gradient algorithm for d.c. programming
Thresholded spectral algorithms for sparse approximations


