Sparsity oracle inequalities for the Lasso
From MaRDI portal
DOI: 10.1214/07-EJS008 · zbMath: 1146.62028 · arXiv: 0705.3308 · Wikidata: Q105584245 · Scholia: Q105584245 · MaRDI QID: Q2426799
Florentina Bunea, Alexandre B. Tsybakov, Marten H. Wegkamp
Publication date: 14 May 2008
Published in: Electronic Journal of Statistics
Full work available at URL: https://arxiv.org/abs/0705.3308
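For orientation, the sparsity oracle inequalities of this paper bound the prediction error of the Lasso by that of the best sparse approximation over a dictionary. A schematic statement follows; the exact constants, the choice of the tuning level, and the conditions on the dictionary (e.g. mutual coherence) are as detailed in the paper:

```latex
% Lasso estimator over a dictionary f_1, ..., f_M:
\hat\lambda = \operatorname*{arg\,min}_{\lambda \in \mathbb{R}^M}
  \Big\{ \frac{1}{n}\sum_{i=1}^n \big(Y_i - f_\lambda(X_i)\big)^2
         + 2 r_n \sum_{j=1}^M |\lambda_j| \Big\},
\qquad f_\lambda = \sum_{j=1}^M \lambda_j f_j .

% Sparsity oracle inequality (schematic): with high probability,
\| f_{\hat\lambda} - f \|_n^2
  \;\le\; C \inf_{\lambda \in \mathbb{R}^M}
        \Big\{ \| f_\lambda - f \|_n^2 + r_n^2 \, M(\lambda) \Big\},
```

where \(M(\lambda) = \#\{j : \lambda_j \neq 0\}\) is the sparsity of \(\lambda\), \(\|\cdot\|_n\) is the empirical norm, and the tuning level \(r_n\) is of order \(\sqrt{\log M / n}\). The inequality shows that the Lasso adapts to the unknown sparsity of the best approximating \(\lambda\) without knowing it in advance.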
Related Items (showing the first 100 items)
Greedy algorithms for prediction ⋮ Regularity properties for sparse regression ⋮ LOL selection in high dimension ⋮ High dimensional regression for regenerative time-series: an application to road traffic modeling ⋮ Penalized logspline density estimation using total variation penalty ⋮ Adaptive log-density estimation ⋮ Some sharp performance bounds for least squares regression with \(L_1\) regularization ⋮ Near-ideal model selection by \(\ell _{1}\) minimization ⋮ Oracle inequalities for the Lasso in the high-dimensional Aalen multiplicative intensity model ⋮ Sparsity in penalized empirical risk minimization ⋮ Solution of linear ill-posed problems using overcomplete dictionaries ⋮ Estimation of matrices with row sparsity ⋮ Sharp MSE bounds for proximal denoising ⋮ Simultaneous analysis of Lasso and Dantzig selector ⋮ Bayesian factor-adjusted sparse regression ⋮ A note on the asymptotic distribution of lasso estimator for correlated data ⋮ Optimal equivariant prediction for high-dimensional linear models with arbitrary predictor covariance ⋮ Oracle inequalities for the lasso in the Cox model ⋮ Rates of convergence of the adaptive LASSO estimators to the oracle distribution and higher order refinements by the bootstrap ⋮ Statistical significance in high-dimensional linear models ⋮ The Dantzig selector and sparsity oracle inequalities ⋮ Sparse recovery under matrix uncertainty ⋮ Asymptotic properties of Lasso+mLS and Lasso+Ridge in sparse high-dimensional linear regression ⋮ Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization ⋮ \(\ell _{1}\)-regularized linear regression: persistence and oracle inequalities ⋮ Bayesian linear regression with sparse priors ⋮ \(\ell_{1}\)-penalization for mixture regression models ⋮ Lasso-type estimators for semiparametric nonlinear mixed-effects models estimation ⋮ High-dimensional generalized linear models and the lasso ⋮ Sup-norm convergence rate and sign concentration 
property of Lasso and Dantzig estimators ⋮ Aggregation of estimators and stochastic optimization ⋮ Adaptive Dantzig density estimation ⋮ \(\ell_1\)-penalized quantile regression in high-dimensional sparse models ⋮ Autoregressive process modeling via the Lasso procedure ⋮ Regularizers for structured sparsity ⋮ Multi-stage convex relaxation for feature selection ⋮ Sparse regression learning by aggregation and Langevin Monte-Carlo ⋮ On the asymptotic properties of the group lasso estimator for linear models ⋮ Lasso, iterative feature selection and the correlation selector: oracle inequalities and numerical performances ⋮ Honest variable selection in linear and logistic regression models via \(\ell _{1}\) and \(\ell _{1}+\ell _{2}\) penalization ⋮ Thresholding-based iterative selection procedures for model selection and shrinkage ⋮ On the conditions used to prove oracle results for the Lasso ⋮ Sparse regression with exact clustering ⋮ PAC-Bayesian bounds for sparse regression estimation with exponential weights ⋮ The Lasso as an \(\ell _{1}\)-ball model selection procedure ⋮ The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso) ⋮ The smooth-Lasso and other \(\ell _{1}+\ell _{2}\)-penalized methods ⋮ Least squares after model selection in high-dimensional sparse models ⋮ Mirror averaging with sparsity priors ⋮ Structured, sparse regression with application to HIV drug resistance ⋮ Sign-constrained least squares estimation for high-dimensional regression ⋮ A relaxed-PPA contraction method for sparse signal recovery ⋮ General nonexact oracle inequalities for classes with a subexponential envelope ⋮ Regularization for Cox's proportional hazards model with NP-dimensionality ⋮ Generalization of constraints for high dimensional regression problems ⋮ Oracle inequalities and optimal inference under group sparsity ⋮ Aggregation of affine estimators ⋮ A new perspective on least squares under convex constraint ⋮ 
\(L_1\)-penalization in functional linear regression with subgaussian design ⋮ The sparsity and bias of the LASSO selection in high-dimensional linear regression ⋮ On the residual empirical process based on the ALASSO in high dimensions and its functional oracle property ⋮ Oracle inequalities for high-dimensional prediction ⋮ Cox process functional learning ⋮ Parallel integrative learning for large-scale multi-response regression with incomplete outcomes ⋮ Decomposable norm minimization with proximal-gradient homotopy algorithm ⋮ Additive model selection ⋮ Overcoming the limitations of phase transition by higher order analysis of regularization techniques ⋮ I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error ⋮ Pivotal estimation via square-root lasso in nonparametric regression ⋮ High-dimensional variable screening and bias in subsequent inference, with an empirical comparison ⋮ Exponential screening and optimal rates of sparse estimation ⋮ Performance guarantees for individualized treatment rules ⋮ Least angle and \(\ell _{1}\) penalized regression: a review ⋮ On asymptotically optimal confidence regions and tests for high-dimensional models ⋮ Lasso Inference for High-Dimensional Time Series ⋮ High-dimensional Gaussian model selection on a Gaussian design ⋮ Nearly unbiased variable selection under minimax concave penalty ⋮ Variable selection in nonparametric additive models ⋮ SPADES and mixture models ⋮ Distribution-Free Predictive Inference For Regression ⋮ Prediction and estimation consistency of sparse multi-class penalized optimal scoring ⋮ Selection by partitioning the solution paths ⋮ On the sensitivity of the Lasso to the number of predictor variables ⋮ Lasso-type recovery of sparse representations for high-dimensional data ⋮ Generalized mirror averaging and \(D\)-convex aggregation ⋮ Some theoretical results on the grouped variables Lasso ⋮ On the exponentially weighted aggregate with the Laplace prior ⋮ 
Aggregation by exponential weighting, sharp PAC-Bayesian bounds and sparsity ⋮ Sparse recovery in convex hulls via entropy penalization ⋮ Aggregation using input-output trade-off ⋮ Analysis of generalized Bregman surrogate algorithms for nonsmooth nonconvex statistical learning ⋮ Feature selection for data integration with mixed multiview data ⋮ High-dimensional additive modeling ⋮ Doubly penalized estimation in additive regression with high-dimensional data ⋮ Structured estimation for the nonparametric Cox model ⋮ Multivariate sparse Laplacian shrinkage for joint estimation of two graphical structures ⋮ Lasso and probabilistic inequalities for multivariate point processes ⋮ Penalized polygram regression ⋮ Sparse high-dimensional varying coefficient model: nonasymptotic minimax study ⋮ Adaptive estimation of the baseline hazard function in the Cox model by model selection, with high-dimensional covariates
This page was built for publication: Sparsity oracle inequalities for the Lasso