Simultaneous analysis of Lasso and Dantzig selector


Publication:2388978

DOI: 10.1214/08-AOS620
zbMath: 1173.62022
arXiv: 0801.1095
OpenAlex: W2116581043
Wikidata: Q100786247
Scholia: Q100786247
MaRDI QID: Q2388978

Peter J. Bickel, Ya'acov Ritov, Alexandre B. Tsybakov

Publication date: 22 July 2009

Published in: The Annals of Statistics

Abstract: We exhibit an approximate equivalence between the Lasso estimator and the Dantzig selector. For both methods we derive parallel oracle inequalities for the prediction risk in the general nonparametric regression model, as well as bounds on the $\ell_p$ estimation loss for $1 \le p \le 2$ in the linear model when the number of variables can be much larger than the sample size.
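To make the two estimators in the abstract concrete, the following is a minimal sketch (not code from the paper): it fits the Lasso via scikit-learn and the Dantzig selector as a linear program solved with scipy.optimize.linprog on synthetic sparse data, then compares their ell_1 estimation errors. The data dimensions, noise level, and tuning parameters alpha and lam are illustrative assumptions, not values analysed in the paper.

# Minimal sketch comparing the Lasso and the Dantzig selector on synthetic
# sparse data (illustrative settings only; not the paper's experiments).
import numpy as np
from scipy.optimize import linprog
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s = 100, 200, 5                    # p >> n with an s-sparse true coefficient vector
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:s] = 3.0
y = X @ beta_true + 0.5 * rng.standard_normal(n)

# Lasso: minimize (1/(2n)) ||y - X b||_2^2 + alpha ||b||_1
alpha = 0.1
beta_lasso = Lasso(alpha=alpha, fit_intercept=False).fit(X, y).coef_

# Dantzig selector: minimize ||b||_1 subject to ||X^T (y - X b)||_inf <= lam.
# Writing b = u - v with u, v >= 0 turns this into a linear program.
lam = n * alpha                          # puts the constraint on a scale comparable to the Lasso penalty
G, c = X.T @ X, X.T @ y
A_ub = np.block([[-G, G], [G, -G]])      # encodes -lam <= c - G (u - v) <= lam componentwise
b_ub = np.concatenate([lam - c, lam + c])
res = linprog(np.ones(2 * p), A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
beta_dantzig = res.x[:p] - res.x[p:]

print("Lasso   ell_1 error:", np.abs(beta_lasso - beta_true).sum())
print("Dantzig ell_1 error:", np.abs(beta_dantzig - beta_true).sum())

With a common scaling of the tuning parameters, the two solutions are typically close on such data, which is the kind of behaviour the paper's parallel oracle inequalities formalise.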


Full work available at URL: https://arxiv.org/abs/0801.1095







Related Items (first 100 items shown)

Conditional sure independence screening by conditional marginal empirical likelihood
High-dimensional tests for functional networks of brain anatomic regions
Variable selection and structure identification for varying coefficient Cox models
Regularized estimation in sparse high-dimensional multivariate regression, with application to a DNA methylation study
Lasso for sparse linear regression with exponentially \(\beta\)-mixing errors
The degrees of freedom of partly smooth regularizers
Accuracy assessment for high-dimensional linear regression
L1-norm-based principal component analysis with adaptive regularization
Structure learning of sparse directed acyclic graphs incorporating the scale-free property
Predictor ranking and false discovery proportion control in high-dimensional regression
Inference for high-dimensional instrumental variables regression
Learning rates for partially linear functional models with high dimensional scalar covariates
A simple homotopy proximal mapping algorithm for compressive sensing
Scalable interpretable learning for multi-response error-in-variables regression
Prediction error after model search
\(\alpha\)-variational inference with statistical guarantees
Robust machine learning by median-of-means: theory and practice
Lasso guarantees for \(\beta\)-mixing heavy-tailed time series
Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
Consistency of \(\ell_1\) penalized negative binomial regressions
Support union recovery in high-dimensional multivariate regression
\(\ell_1\)-penalized quantile regression in high-dimensional sparse models
Variable selection for sparse logistic regression
Multi-stage convex relaxation for feature selection
Detection of a sparse submatrix of a high-dimensional noisy matrix
RIPless compressed sensing from anisotropic measurements
On a unified view of nullspace-type conditions for recoveries associated with general sparsity structures
Calibrating nonconvex penalized regression in ultra-high dimension
Gaussian approximations and multiplier bootstrap for maxima of sums of high-dimensional random vectors
Confidence sets in sparse regression
Model selection for high-dimensional linear regression with dependent observations
Convergence rates of variational posterior distributions
A general framework for Bayes structured linear models
Asymptotic risk and phase transition of \(l_1\)-penalized robust estimator
Aggregation of affine estimators
Estimation and variable selection with exponential weights
Statistical inference in compound functional models
Estimation in the presence of many nuisance parameters: composite likelihood and plug-in likelihood
On Dantzig and Lasso estimators of the drift in a high dimensional Ornstein-Uhlenbeck model
Finite-sample analysis of \(M\)-estimators using self-concordance
Adaptive robust variable selection
Sparse identification of truncation errors
On the uniform convergence of empirical norms and inner products, with application to causal inference
A look at robustness and stability of \(\ell_1\)-versus \(\ell_0\)-regularization: discussion of papers by Bertsimas et al. and Hastie et al.
Finite sample performance of linear least squares estimation
Robust low-rank multiple kernel learning with compound regularization
Parallel integrative learning for large-scale multi-response regression with incomplete outcomes
Pivotal estimation via square-root lasso in nonparametric regression
Lasso with long memory regression errors
Sparse recovery with coherent tight frames via analysis Dantzig selector and analysis LASSO
Robust dequantized compressive sensing
Sparse and efficient estimation for partial spline models with increasing dimension
Sparse semiparametric discriminant analysis
High-dimensional variable screening and bias in subsequent inference, with an empirical comparison
Sparse distance metric learning
Sparse trace norm regularization
A global homogeneity test for high-dimensional linear regression
Prediction error bounds for linear regression with the TREX
Boosting with structural sparsity: a differential inclusion approach
Prediction and estimation consistency of sparse multi-class penalized optimal scoring
Sorted concave penalized regression
Strong oracle optimality of folded concave penalized estimation
Endogeneity in high dimensions
Instrumental variables estimation with many weak instruments using regularized JIVE
Leave-one-out cross-validation is risk consistent for Lasso
Robust finite mixture regression for heterogeneous targets
On the differences between \(L_2\) boosting and the Lasso
QUADRO: a supervised dimension reduction method via Rayleigh quotient optimization
Structured analysis of the high-dimensional FMR model
Variable selection with spatially autoregressive errors: a generalized moments Lasso estimator
Regularization methods for high-dimensional sparse control function models
High-dimensional regression in practice: an empirical study of finite-sample prediction, variable selection and ranking
Selective inference via marginal screening for high dimensional classification
Sharp oracle inequalities for low-complexity priors
Computational and statistical analyses for robust non-convex sparse regularized regression problem
Deviation inequalities for separately Lipschitz functionals of composition of random functions
Sparse Poisson regression with penalized weighted score function
Localized Gaussian width of \(M\)-convex hulls with applications to Lasso and convex aggregation
On Lasso refitting strategies
Adaptively weighted group Lasso for semiparametric quantile regression models
On the asymptotic variance of the debiased Lasso
Posterior asymptotic normality for an individual coordinate in high-dimensional linear regression
High-dimensional generalized linear models incorporating graphical structure among predictors
Doubly penalized estimation in additive regression with high-dimensional data
Projected spline estimation of the nonparametric function in high-dimensional partially linear models for massive data
Joint feature screening for ultra-high-dimensional sparse additive hazards model by the sparsity-restricted pseudo-score estimator
Sample average approximation with sparsity-inducing penalty for high-dimensional stochastic programming
Non-separable models with high-dimensional data
The Dantzig selector for a linear model of diffusion processes
Weaker regularity conditions and sparse recovery in high-dimensional regression
Structured estimation for the nonparametric Cox model
Stable recovery of low rank matrices from nuclear norm minimization
Minimax-optimal nonparametric regression in high dimensions
Near oracle performance and block analysis of signal space greedy methods
Lasso and probabilistic inequalities for multivariate point processes
Preconditioning the Lasso for sign consistency
Sparse learning via Boolean relaxations
High dimensional single index models
Innovated interaction screening for high-dimensional nonlinear classification
Sparse high-dimensional varying coefficient model: nonasymptotic minimax study





This page was built for publication: Simultaneous analysis of Lasso and Dantzig selector