Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators


Publication:2426826

DOI: 10.1214/08-EJS177
zbMath: 1306.62155
arXiv: 0801.4610
OpenAlex: W3100190044
Wikidata: Q105584246
Scholia: Q105584246
MaRDI QID: Q2426826

Karim Lounici

Publication date: 14 May 2008

Published in: Electronic Journal of Statistics

Full work available at URL: https://arxiv.org/abs/0801.4610




Related Items (56)

Worst possible sub-directions in high-dimensional models
Nonnegative-Lasso and application in index tracking
Iterative algorithm for discrete structure recovery
De-biasing the Lasso with degrees-of-freedom adjustment
Sliding window strategy for convolutional spike sorting with Lasso. Algorithm, theoretical guarantees and complexity
Estimation of matrices with row sparsity
Extensions of stability selection using subsamples of observations and covariates
Strong consistency of Lasso estimators
Sparse linear regression models of high dimensional covariates with non-Gaussian outliers and Berkson error-in-variable under heteroscedasticity
Sparse recovery under matrix uncertainty
Asymptotic properties of Lasso+mLS and Lasso+Ridge in sparse high-dimensional linear regression
Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization
\(\ell_1\)-regularized linear regression: persistence and oracle inequalities
Bayesian linear regression with sparse priors
High-dimensional covariance matrix estimation with missing observations
Sample average approximation with heavier tails II: localization in stochastic convex optimization and persistence results for the Lasso
Variable selection, monotone likelihood ratio and group sparsity
A Critical Review of LASSO and Its Derivatives for Variable Selection Under Dependence Among Covariates
Robust machine learning by median-of-means: theory and practice
Adaptive Dantzig density estimation
Variable selection and regularization via arbitrary rectangle-range generalized elastic net
Asymptotic theory in network models with covariates and a growing number of node parameters
Regularizers for structured sparsity
Multi-stage convex relaxation for feature selection
Minimax risks for sparse regressions: ultra-high dimensional phenomenons
On the asymptotic properties of the group lasso estimator for linear models
Honest variable selection in linear and logistic regression models via \(\ell_1\) and \(\ell_1+\ell_2\) penalization
On the conditions used to prove oracle results for the Lasso
PAC-Bayesian bounds for sparse regression estimation with exponential weights
The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso)
The smooth-Lasso and other \(\ell_1+\ell_2\)-penalized methods
Least squares after model selection in high-dimensional sparse models
Calibrating nonconvex penalized regression in ultra-high dimension
Transductive versions of the Lasso and the Dantzig selector
General nonexact oracle inequalities for classes with a subexponential envelope
Generalization of constraints for high dimensional regression problems
A general framework for Bayes structured linear models
Oracle inequalities and optimal inference under group sparsity
Estimation and variable selection with exponential weights
Ranking-Based Variable Selection for high-dimensional data
A two-stage regularization method for variable selection and forecasting in high-order interaction model
Normalized and standard Dantzig estimators: two approaches
Robust low-rank multiple kernel learning with compound regularization
Regularization and the small-ball method. I: Sparse recovery
Pivotal estimation via square-root lasso in nonparametric regression
Unnamed Item
Randomized pick-freeze for sparse Sobol indices estimation in high dimension
SPADES and mixture models
Simultaneous feature selection and clustering based on square root optimization
Quasi-likelihood and/or robust estimation in high dimensions
Randomized maximum-contrast selection: subagging for large-scale regression
Variable selection with Hamming loss
Variable selection with spatially autoregressive errors: a generalized moments Lasso estimator
Tuning parameter calibration for \(\ell_1\)-regularized logistic regression
The Dantzig Selector in Cox's Proportional Hazards Model
Oracle Inequalities for Convex Loss Functions with Nonlinear Targets


Uses Software



Cites Work




This page was built for publication: Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators