Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
Publication: 2426826
DOI: 10.1214/08-EJS177
zbMath: 1306.62155
arXiv: 0801.4610
OpenAlex: W3100190044
Wikidata: Q105584246 (Scholia: Q105584246)
MaRDI QID: Q2426826
Publication date: 14 May 2008
Published in: Electronic Journal of Statistics
Full work available at URL: https://arxiv.org/abs/0801.4610
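The indexed paper establishes sup-norm convergence rates for the Lasso and Dantzig estimators and conditions under which the estimated coefficient signs concentrate on the true signs. As a minimal sketch of what "sign concentration" means in practice, the simulation below fits a Lasso with a penalty of the theoretically motivated order \(\sigma\sqrt{\log p / n}\) and checks whether the estimated sign pattern matches the truth; the design, sparsity, noise level, and constants are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (illustrative, not from the paper): sign concentration of the
# Lasso in a sparse Gaussian design. All constants below are assumptions.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s, sigma = 200, 500, 5, 0.5        # samples, features, sparsity, noise sd
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 2.0 * rng.choice([-1.0, 1.0], size=s)   # well-separated nonzero signs
y = X @ beta + sigma * rng.standard_normal(n)

# Penalty of order sigma * sqrt(log(p) / n), the scaling used in this literature.
alpha = 2.0 * sigma * np.sqrt(2.0 * np.log(p) / n)
beta_hat = Lasso(alpha=alpha).fit(X, y).coef_

print("sup-norm error:      ", np.max(np.abs(beta_hat - beta)))
print("sign pattern matches:", np.array_equal(np.sign(beta_hat), np.sign(beta)))
```

With nonzero coefficients well separated from zero, the estimated sign pattern typically matches the truth; shrinking the minimum signal strength toward the noise level breaks the match, which is the regime the paper's conditions exclude.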
Related Items
- Worst possible sub-directions in high-dimensional models
- Nonnegative-Lasso and application in index tracking
- Iterative algorithm for discrete structure recovery
- De-biasing the Lasso with degrees-of-freedom adjustment
- Sliding window strategy for convolutional spike sorting with Lasso. Algorithm, theoretical guarantees and complexity
- Estimation of matrices with row sparsity
- Extensions of stability selection using subsamples of observations and covariates
- Strong consistency of Lasso estimators
- Sparse linear regression models of high dimensional covariates with non-Gaussian outliers and Berkson error-in-variable under heteroscedasticity
- Sparse recovery under matrix uncertainty
- Asymptotic properties of Lasso+mLS and Lasso+Ridge in sparse high-dimensional linear regression
- Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization
- \(\ell _{1}\)-regularized linear regression: persistence and oracle inequalities
- Bayesian linear regression with sparse priors
- High-dimensional covariance matrix estimation with missing observations
- Sample average approximation with heavier tails II: localization in stochastic convex optimization and persistence results for the Lasso
- Variable selection, monotone likelihood ratio and group sparsity
- A Critical Review of LASSO and Its Derivatives for Variable Selection Under Dependence Among Covariates
- Robust machine learning by median-of-means: theory and practice
- Adaptive Dantzig density estimation
- Variable selection and regularization via arbitrary rectangle-range generalized elastic net
- Asymptotic theory in network models with covariates and a growing number of node parameters
- Regularizers for structured sparsity
- Multi-stage convex relaxation for feature selection
- Minimax risks for sparse regressions: ultra-high dimensional phenomenons
- On the asymptotic properties of the group lasso estimator for linear models
- Honest variable selection in linear and logistic regression models via \(\ell _{1}\) and \(\ell _{1}+\ell _{2}\) penalization
- On the conditions used to prove oracle results for the Lasso
- PAC-Bayesian bounds for sparse regression estimation with exponential weights
- The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso)
- The smooth-Lasso and other \(\ell _{1}+\ell _{2}\)-penalized methods
- Least squares after model selection in high-dimensional sparse models
- Calibrating nonconvex penalized regression in ultra-high dimension
- Transductive versions of the Lasso and the Dantzig selector
- General nonexact oracle inequalities for classes with a subexponential envelope
- Generalization of constraints for high dimensional regression problems
- A general framework for Bayes structured linear models
- Oracle inequalities and optimal inference under group sparsity
- Estimation and variable selection with exponential weights
- Ranking-Based Variable Selection for high-dimensional data
- A two-stage regularization method for variable selection and forecasting in high-order interaction model
- Normalized and standard Dantzig estimators: two approaches
- Robust low-rank multiple kernel learning with compound regularization
- Regularization and the small-ball method. I: Sparse recovery
- Pivotal estimation via square-root lasso in nonparametric regression
- Randomized pick-freeze for sparse Sobol indices estimation in high dimension
- SPADES and mixture models
- Simultaneous feature selection and clustering based on square root optimization
- Quasi-likelihood and/or robust estimation in high dimensions
- Randomized maximum-contrast selection: subagging for large-scale regression
- Variable selection with Hamming loss
- Variable selection with spatially autoregressive errors: a generalized moments Lasso estimator
- Tuning parameter calibration for \(\ell_1\)-regularized logistic regression
- The Dantzig Selector in Cox's Proportional Hazards Model
- Oracle Inequalities for Convex Loss Functions with Nonlinear Targets
Cites Work
- The Adaptive Lasso and Its Oracle Properties
- The Dantzig selector and sparsity oracle inequalities
- Sparsity in penalized empirical risk minimization
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Lasso-type recovery of sparse representations for high-dimensional data
- Persistence in high-dimensional linear predictor selection and the virtue of overparametrization
- Asymptotics for Lasso-type estimators
- Least angle regression. (With discussion)
- Simultaneous analysis of Lasso and Dantzig selector
- High-dimensional generalized linear models and the lasso
- Sparsity oracle inequalities for the Lasso
- Aggregation for Gaussian regression
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder)
- High-dimensional graphs and variable selection with the Lasso
- Stable recovery of sparse overcomplete representations in the presence of noise
- Atomic Decomposition by Basis Pursuit