Sparsity oracle inequalities for the Lasso

DOI: 10.1214/07-EJS008
zbMath: 1146.62028
arXiv: 0705.3308
Wikidata: Q105584245
Scholia: Q105584245
MaRDI QID: Q2426799

Florentina Bunea, Alexandre B. Tsybakov, Marten H. Wegkamp

Publication date: 14 May 2008

Published in: Electronic Journal of Statistics

Full work available at URL: https://arxiv.org/abs/0705.3308
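
For orientation beyond the bare record: the paper establishes sparsity oracle inequalities for the Lasso estimator \(\hat\beta = \arg\min_\beta \frac{1}{n}\|Y - X\beta\|_2^2 + 2\lambda\|\beta\|_1\), with a tuning parameter of order \(\sqrt{\log M / n}\) for a dictionary of size \(M\). The sketch below is purely illustrative and is not code from the paper (which is theoretical); it assumes synthetic Gaussian data and scikit-learn's standard Lasso solver, with an arbitrary illustrative constant in the tuning parameter.

    # Minimal sketch: the Lasso in the high-dimensional regime (p > n) that
    # sparsity oracle inequalities address. Synthetic data; scikit-learn assumed.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, p, s = 100, 200, 5                 # n samples, p predictors, s-sparse truth
    X = rng.standard_normal((n, p))
    beta_true = np.zeros(p)
    beta_true[:s] = 1.0
    y = X @ beta_true + 0.5 * rng.standard_normal(n)

    # Theory suggests lambda of order sigma * sqrt(log(p) / n); the constant
    # 0.5 here is an illustrative choice, not the paper's.
    lam = 0.5 * np.sqrt(np.log(p) / n)
    fit = Lasso(alpha=lam).fit(X, y)      # sklearn minimizes ||y - Xb||^2/(2n) + alpha*||b||_1
    print("nonzero coefficients:", np.count_nonzero(fit.coef_))

With a tuning parameter of this order, the fitted coefficient vector is typically sparse even though \(p > n\); bounding its prediction error in terms of the sparsity of the best candidate is exactly what the paper's oracle inequalities quantify.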

Related Items

Greedy algorithms for prediction
Regularity properties for sparse regression
LOL selection in high dimension
High dimensional regression for regenerative time-series: an application to road traffic modeling
Penalized logspline density estimation using total variation penalty
Adaptive log-density estimation
Some sharp performance bounds for least squares regression with \(L_1\) regularization
Near-ideal model selection by \(\ell _{1}\) minimization
Oracle inequalities for the Lasso in the high-dimensional Aalen multiplicative intensity model
Sparsity in penalized empirical risk minimization
Solution of linear ill-posed problems using overcomplete dictionaries
Estimation of matrices with row sparsity
Sharp MSE bounds for proximal denoising
Simultaneous analysis of Lasso and Dantzig selector
Bayesian factor-adjusted sparse regression
A note on the asymptotic distribution of lasso estimator for correlated data
Optimal equivariant prediction for high-dimensional linear models with arbitrary predictor covariance
Oracle inequalities for the lasso in the Cox model
Rates of convergence of the adaptive LASSO estimators to the oracle distribution and higher order refinements by the bootstrap
Statistical significance in high-dimensional linear models
The Dantzig selector and sparsity oracle inequalities
Sparse recovery under matrix uncertainty
Asymptotic properties of Lasso+mLS and Lasso+Ridge in sparse high-dimensional linear regression
Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization
\(\ell _{1}\)-regularized linear regression: persistence and oracle inequalities
Bayesian linear regression with sparse priors
\(\ell_{1}\)-penalization for mixture regression models
Lasso-type estimators for semiparametric nonlinear mixed-effects models estimation
High-dimensional generalized linear models and the lasso
Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
Aggregation of estimators and stochastic optimization
Adaptive Dantzig density estimation
\(\ell_1\)-penalized quantile regression in high-dimensional sparse models
Autoregressive process modeling via the Lasso procedure
Regularizers for structured sparsity
Multi-stage convex relaxation for feature selection
Sparse regression learning by aggregation and Langevin Monte-Carlo
On the asymptotic properties of the group lasso estimator for linear models
Lasso, iterative feature selection and the correlation selector: oracle inequalities and numerical performances
Honest variable selection in linear and logistic regression models via \(\ell _{1}\) and \(\ell _{1}+\ell _{2}\) penalization
Thresholding-based iterative selection procedures for model selection and shrinkage
On the conditions used to prove oracle results for the Lasso
Sparse regression with exact clustering
PAC-Bayesian bounds for sparse regression estimation with exponential weights
The Lasso as an \(\ell _{1}\)-ball model selection procedure
The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso)
The smooth-Lasso and other \(\ell _{1}+\ell _{2}\)-penalized methods
Least squares after model selection in high-dimensional sparse models
Mirror averaging with sparsity priors
Structured, sparse regression with application to HIV drug resistance
Sign-constrained least squares estimation for high-dimensional regression
A relaxed-PPA contraction method for sparse signal recovery
General nonexact oracle inequalities for classes with a subexponential envelope
Regularization for Cox's proportional hazards model with NP-dimensionality
Generalization of constraints for high dimensional regression problems
Oracle inequalities and optimal inference under group sparsity
Aggregation of affine estimators
A new perspective on least squares under convex constraint
\(L_1\)-penalization in functional linear regression with subgaussian design
The sparsity and bias of the LASSO selection in high-dimensional linear regression
On the residual empirical process based on the ALASSO in high dimensions and its functional oracle property
Oracle inequalities for high-dimensional prediction
Cox process functional learning
Parallel integrative learning for large-scale multi-response regression with incomplete outcomes
Decomposable norm minimization with proximal-gradient homotopy algorithm
Additive model selection
Overcoming the limitations of phase transition by higher order analysis of regularization techniques
I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error
Pivotal estimation via square-root lasso in nonparametric regression
High-dimensional variable screening and bias in subsequent inference, with an empirical comparison
Exponential screening and optimal rates of sparse estimation
Performance guarantees for individualized treatment rules
Least angle and \(\ell _{1}\) penalized regression: a review
On asymptotically optimal confidence regions and tests for high-dimensional models
Lasso Inference for High-Dimensional Time Series
High-dimensional Gaussian model selection on a Gaussian design
Nearly unbiased variable selection under minimax concave penalty
Variable selection in nonparametric additive models
SPADES and mixture models
Distribution-Free Predictive Inference For Regression
Prediction and estimation consistency of sparse multi-class penalized optimal scoring
Selection by partitioning the solution paths
On the sensitivity of the Lasso to the number of predictor variables
Lasso-type recovery of sparse representations for high-dimensional data
Generalized mirror averaging and \(D\)-convex aggregation
Some theoretical results on the grouped variables Lasso
On the exponentially weighted aggregate with the Laplace prior
Aggregation by exponential weighting, sharp PAC-Bayesian bounds and sparsity
Sparse recovery in convex hulls via entropy penalization
Aggregation using input-output trade-off
Analysis of generalized Bregman surrogate algorithms for nonsmooth nonconvex statistical learning
Feature selection for data integration with mixed multiview data
High-dimensional additive modeling
Doubly penalized estimation in additive regression with high-dimensional data
Structured estimation for the nonparametric Cox model
Multivariate sparse Laplacian shrinkage for joint estimation of two graphical structures
Lasso and probabilistic inequalities for multivariate point processes
Penalized polygram regression
Sparse high-dimensional varying coefficient model: nonasymptotic minimax study
Adaptive estimation of the baseline hazard function in the Cox model by model selection, with high-dimensional covariates
Classification of longitudinal data through a semiparametric mixed‐effects model based on lasso‐type estimators
Estimation for High-Dimensional Linear Mixed-Effects Models Using ℓ1-Penalization
A component lasso
Sparsest representations and approximations of an underdetermined linear system
Penalised robust estimators for sparse and high-dimensional linear models
Sample average approximation with heavier tails II: localization in stochastic convex optimization and persistence results for the Lasso
Targeted Inference Involving High-Dimensional Data Using Nuisance Penalized Regression
Non-asymptotic oracle inequalities for the Lasso and Group Lasso in high dimensional logistic model
Lasso in Infinite dimension: application to variable selection in functional multivariate linear regression
Sparse estimation via lower-order penalty optimization methods in high-dimensional linear regression
An alternative to synthetic control for models with many covariates under sparsity
Recovery of partly sparse and dense signals
On the finite-sample analysis of \(\Theta\)-estimators
Estimation and variable selection with exponential weights
Oracle Inequalities for Local and Global Empirical Risk Minimizers
A Tuning-free Robust and Efficient Approach to High-dimensional Regression
Consistent parameter estimation for Lasso and approximate message passing
The Lasso for High Dimensional Regression with a Possible Change Point
Ridge regression and asymptotic minimax estimation over spheres of growing dimension
Leave-one-out cross-validation is risk consistent for Lasso
Quasi-likelihood and/or robust estimation in high dimensions
A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
A general theory of concave regularization for high-dimensional sparse estimation problems
Performance bounds for parameter estimates of high-dimensional linear models with correlated errors
A study on tuning parameter selection for the high-dimensional lasso
Weak Convergence of the Regularization Path in Penalized M-Estimation
The Adaptive Gril Estimator with a Diverging Number of Parameters
Asymptotic Equivalence of Regularization Methods in Thresholded Parameter Space
Variance estimation based on blocked 3×2 cross-validation in high-dimensional linear regression
Univariate measurement error selection likelihood for variable selection of additive model
Probability estimation with machine learning methods for dichotomous and multicategory outcome: Theory