Simultaneous analysis of Lasso and Dantzig selector
From MaRDI portal
Publication:2388978
DOI: 10.1214/08-AOS620 · zbMath: 1173.62022 · arXiv: 0801.1095 · OpenAlex: W2116581043 · Wikidata: Q100786247 · Scholia: Q100786247 · MaRDI QID: Q2388978
Alexandre B. Tsybakov, Ya'acov Ritov, Peter J. Bickel
Publication date: 22 July 2009
Published in: The Annals of Statistics
Abstract: We exhibit an approximate equivalence between the Lasso estimator and the Dantzig selector. For both methods we derive parallel oracle inequalities for the prediction risk in the general nonparametric regression model, as well as bounds on the $\ell_p$ estimation loss for $1 \le p \le 2$ in the linear model when the number of variables can be much larger than the sample size.
Full work available at URL: https://arxiv.org/abs/0801.1095
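The approximate equivalence stated in the abstract can be illustrated numerically: the Lasso solves a penalized least-squares problem, while the Dantzig selector minimizes the $\ell_1$ norm subject to a sup-norm constraint on the correlation of the residuals with the design, and on sparse data with a common tuning parameter the two estimates are typically close. The sketch below (not from the paper; the synthetic design, the value of `lam`, and both solver implementations are illustrative assumptions) fits the Lasso by coordinate descent and the Dantzig selector as a linear program.

```python
import numpy as np
from scipy.optimize import linprog

# Synthetic sparse linear model (illustrative choices, not from the paper)
rng = np.random.default_rng(0)
n, p, s = 100, 40, 3
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:s] = [3.0, -2.0, 1.5]
y = X @ beta_true + 0.1 * rng.standard_normal(n)

lam = 0.2  # common tuning parameter for both estimators (assumed value)

def lasso_cd(X, y, lam, n_sweeps=500):
    """Lasso: min_b (1/2n)||y - Xb||^2 + lam*||b||_1, by coordinate descent."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_sweeps):
        for j in range(p):
            # Partial residual excluding coordinate j, then soft-threshold.
            r = y - X @ b + X[:, j] * b[j]
            c = X[:, j] @ r / n
            b[j] = np.sign(c) * max(abs(c) - lam, 0.0) / col_sq[j]
    return b

def dantzig_lp(X, y, lam):
    """Dantzig selector: min ||b||_1 s.t. ||X^T(y - Xb)/n||_inf <= lam,
    as an LP in b = u - v with u, v >= 0."""
    n, p = X.shape
    G = X.T @ X / n
    c_vec = X.T @ y / n
    # Constraints: -lam <= c_vec - G(u - v) <= lam.
    A_ub = np.block([[G, -G], [-G, G]])
    b_ub = np.concatenate([c_vec + lam, lam - c_vec])
    res = linprog(np.ones(2 * p), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (2 * p), method="highs")
    uv = res.x
    return uv[:p] - uv[p:]

b_lasso = lasso_cd(X, y, lam)
b_dantzig = dantzig_lp(X, y, lam)
# Gap between the two estimates; typically small on such sparse designs.
print(np.max(np.abs(b_lasso - b_dantzig)))
```

Both estimators place their largest coefficients on the true support and shrink the rest toward zero; the paper's oracle inequalities make this closeness precise under restricted-eigenvalue-type conditions on the design.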
Nonparametric regression and quantile regression (62G08) Asymptotic properties of nonparametric inference (62G20) Linear regression; mixed models (62J05) Prediction theory (aspects of stochastic processes) (60G25)
Cites Work
- The Dantzig selector and sparsity oracle inequalities
- Sparsity in penalized empirical risk minimization
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Lasso-type recovery of sparse representations for high-dimensional data
- Persistence in high-dimensional linear predictor selection and the virtue of overparametrization
- Functional aggregation for nonparametric regression.
- Asymptotics for Lasso-type estimators.
- Least angle regression. (With discussion)
- High-dimensional generalized linear models and the lasso
- Sparsity oracle inequalities for the Lasso
- Aggregation for Gaussian regression
- Pathwise coordinate optimization
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- High-dimensional graphs and variable selection with the Lasso
- Stable recovery of sparse overcomplete representations in the presence of noise
- The Group Lasso for Logistic Regression
- A new approach to variable selection in least squares problems
- Aggregation and Sparsity Via ℓ1 Penalized Least Squares
- Sparse Density Estimation with ℓ1 Penalties
Related Items (only showing first 100 items)
Conditional sure independence screening by conditional marginal empirical likelihood ⋮ High-dimensional tests for functional networks of brain anatomic regions ⋮ Variable selection and structure identification for varying coefficient Cox models ⋮ Regularized estimation in sparse high-dimensional multivariate regression, with application to a DNA methylation study ⋮ Lasso for sparse linear regression with exponentially \(\beta\)-mixing errors ⋮ The degrees of freedom of partly smooth regularizers ⋮ Accuracy assessment for high-dimensional linear regression ⋮ L1-norm-based principal component analysis with adaptive regularization ⋮ Structure learning of sparse directed acyclic graphs incorporating the scale-free property ⋮ Predictor ranking and false discovery proportion control in high-dimensional regression ⋮ Inference for high-dimensional instrumental variables regression ⋮ Learning rates for partially linear functional models with high dimensional scalar covariates ⋮ A simple homotopy proximal mapping algorithm for compressive sensing ⋮ Scalable interpretable learning for multi-response error-in-variables regression ⋮ Prediction error after model search ⋮ \(\alpha\)-variational inference with statistical guarantees ⋮ Robust machine learning by median-of-means: theory and practice ⋮ Lasso guarantees for \(\beta \)-mixing heavy-tailed time series ⋮ Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators ⋮ Consistency of \(\ell_1\) penalized negative binomial regressions ⋮ Support union recovery in high-dimensional multivariate regression ⋮ \(\ell_1\)-penalized quantile regression in high-dimensional sparse models ⋮ Variable selection for sparse logistic regression ⋮ Multi-stage convex relaxation for feature selection ⋮ Detection of a sparse submatrix of a high-dimensional noisy matrix ⋮ RIPless compressed sensing from anisotropic measurements ⋮ On a unified view of nullspace-type conditions for recoveries associated with general 
sparsity structures ⋮ Calibrating nonconvex penalized regression in ultra-high dimension ⋮ Gaussian approximations and multiplier bootstrap for maxima of sums of high-dimensional random vectors ⋮ Confidence sets in sparse regression ⋮ Model selection for high-dimensional linear regression with dependent observations ⋮ Convergence rates of variational posterior distributions ⋮ A general framework for Bayes structured linear models ⋮ Asymptotic risk and phase transition of \(l_1\)-penalized robust estimator ⋮ Aggregation of affine estimators ⋮ Estimation and variable selection with exponential weights ⋮ Statistical inference in compound functional models ⋮ Estimation in the presence of many nuisance parameters: composite likelihood and plug-in likelihood ⋮ On Dantzig and Lasso estimators of the drift in a high dimensional Ornstein-Uhlenbeck model ⋮ Finite-sample analysis of \(M\)-estimators using self-concordance ⋮ Adaptive robust variable selection ⋮ Sparse identification of truncation errors ⋮ On the uniform convergence of empirical norms and inner products, with application to causal inference ⋮ A look at robustness and stability of \(\ell_1\)-versus \(\ell_0\)-regularization: discussion of papers by Bertsimas et al. and Hastie et al. 
⋮ Finite sample performance of linear least squares estimation ⋮ Robust low-rank multiple kernel learning with compound regularization ⋮ Parallel integrative learning for large-scale multi-response regression with incomplete outcomes ⋮ Pivotal estimation via square-root lasso in nonparametric regression ⋮ Lasso with long memory regression errors ⋮ Sparse recovery with coherent tight frames via analysis Dantzig selector and analysis LASSO ⋮ Robust dequantized compressive sensing ⋮ Sparse and efficient estimation for partial spline models with increasing dimension ⋮ Sparse semiparametric discriminant analysis ⋮ High-dimensional variable screening and bias in subsequent inference, with an empirical comparison ⋮ Sparse distance metric learning ⋮ Sparse trace norm regularization ⋮ A global homogeneity test for high-dimensional linear regression ⋮ Prediction error bounds for linear regression with the TREX ⋮ Boosting with structural sparsity: a differential inclusion approach ⋮ Prediction and estimation consistency of sparse multi-class penalized optimal scoring ⋮ Sorted concave penalized regression ⋮ Strong oracle optimality of folded concave penalized estimation ⋮ Endogeneity in high dimensions ⋮ Instrumental variables estimation with many weak instruments using regularized JIVE ⋮ Leave-one-out cross-validation is risk consistent for Lasso ⋮ Robust finite mixture regression for heterogeneous targets ⋮ On the differences between \(L_2\) boosting and the Lasso ⋮ QUADRO: a supervised dimension reduction method via Rayleigh quotient optimization ⋮ Structured analysis of the high-dimensional FMR model ⋮ Variable selection with spatially autoregressive errors: a generalized moments Lasso estimator ⋮ Regularization methods for high-dimensional sparse control function models ⋮ High-dimensional regression in practice: an empirical study of finite-sample prediction, variable selection and ranking ⋮ Selective inference via marginal screening for high dimensional classification ⋮ 
Sharp oracle inequalities for low-complexity priors ⋮ Computational and statistical analyses for robust non-convex sparse regularized regression problem ⋮ Deviation inequalities for separately Lipschitz functionals of composition of random functions ⋮ Sparse Poisson regression with penalized weighted score function ⋮ Localized Gaussian width of \(M\)-convex hulls with applications to Lasso and convex aggregation ⋮ On Lasso refitting strategies ⋮ Adaptively weighted group Lasso for semiparametric quantile regression models ⋮ On the asymptotic variance of the debiased Lasso ⋮ Posterior asymptotic normality for an individual coordinate in high-dimensional linear regression ⋮ High-dimensional generalized linear models incorporating graphical structure among predictors ⋮ Doubly penalized estimation in additive regression with high-dimensional data ⋮ Projected spline estimation of the nonparametric function in high-dimensional partially linear models for massive data ⋮ Joint feature screening for ultra-high-dimensional sparse additive hazards model by the sparsity-restricted pseudo-score estimator ⋮ Sample average approximation with sparsity-inducing penalty for high-dimensional stochastic programming ⋮ Non-separable models with high-dimensional data ⋮ The Dantzig selector for a linear model of diffusion processes ⋮ Weaker regularity conditions and sparse recovery in high-dimensional regression ⋮ Structured estimation for the nonparametric Cox model ⋮ Stable recovery of low rank matrices from nuclear norm minimization ⋮ Minimax-optimal nonparametric regression in high dimensions ⋮ Near oracle performance and block analysis of signal space greedy methods ⋮ Lasso and probabilistic inequalities for multivariate point processes ⋮ Preconditioning the Lasso for sign consistency ⋮ Sparse learning via Boolean relaxations ⋮ High dimensional single index models ⋮ Innovated interaction screening for high-dimensional nonlinear classification ⋮ Sparse high-dimensional varying 
coefficient model: nonasymptotic minimax study
This page was built for publication: Simultaneous analysis of Lasso and Dantzig selector