The Dantzig selector and sparsity oracle inequalities
Publication: 605023
DOI: 10.3150/09-BEJ187 · zbMath: 1452.62486 · arXiv: 0909.0861 · OpenAlex: W2005838333 · Wikidata: Q105584437 · Scholia: Q105584437 · MaRDI QID: Q605023
Publication date: 12 November 2010
Published in: Bernoulli
Full work available at URL: https://arxiv.org/abs/0909.0861
Related Items
- Regularity properties for sparse regression
- Censored linear model in high dimensions. Penalised linear regression on high-dimensional data with left-censored response variable
- Some sharp performance bounds for least squares regression with \(L_1\) regularization
- The benefit of group sparsity in group inference with de-biased scaled group Lasso
- Simultaneous analysis of Lasso and Dantzig selector
- Recovery error analysis of noisy measurement in compressed sensing
- Oracle inequalities for the lasso in the Cox model
- Estimation and variable selection in partial linear single index models with error-prone linear covariates
- Statistical significance in high-dimensional linear models
- Sparse recovery under matrix uncertainty
- Relaxed sparse eigenvalue conditions for sparse estimation via non-convex regularized regression
- High-dimensional covariance matrix estimation with missing observations
- Sample average approximation with heavier tails II: localization in stochastic convex optimization and persistence results for the Lasso
- \(\ell_{1}\)-penalization for mixture regression models
- Sparsity in multiple kernel learning
- Analysis of sparse recovery for Legendre expansions using envelope bound
- Lasso in infinite dimension: application to variable selection in functional multivariate linear regression
- Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
- Expectile trace regression via low-rank and group sparsity regularization
- Goodness-of-fit tests for high dimensional linear models
- High-dimensional additive hazards models and the lasso
- The Lasso problem and uniqueness
- On the conditions used to prove oracle results for the Lasso
- PAC-Bayesian bounds for sparse regression estimation with exponential weights
- The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso)
- Mirror averaging with sparsity priors
- Transductive versions of the Lasso and the Dantzig selector
- General nonexact oracle inequalities for classes with a subexponential envelope
- Regularization for Cox's proportional hazards model with NP-dimensionality
- Generalization of constraints for high dimensional regression problems
- Gaussian approximations and multiplier bootstrap for maxima of sums of high-dimensional random vectors
- Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion
- Factor models and variable selection in high-dimensional regression analysis
- A new perspective on least squares under convex constraint
- \(L_1\)-penalization in functional linear regression with subgaussian design
- Normalized and standard Dantzig estimators: two approaches
- Oracle inequalities for the Lasso in the additive hazards model with interval-censored data
- High-dimensional variable screening and bias in subsequent inference, with an empirical comparison
- Exponential screening and optimal rates of sparse estimation
- Confidence intervals for low dimensional parameters in high dimensional linear models
- Quasi-likelihood and/or robust estimation in high dimensions
- A selective review of group selection in high-dimensional models
- A general theory of concave regularization for high-dimensional sparse estimation problems
- Comments on: \(\ell_{1}\)-penalization for mixture regression models
- Sparse recovery in convex hulls via entropy penalization
- Simulation-based Value-at-Risk for nonlinear portfolios
- The partial linear model in high dimensions
- Weaker regularity conditions and sparse recovery in high-dimensional regression
- Asymptotic normality and optimalities in estimation of large Gaussian graphical models
Cites Work
- Sparsity in penalized empirical risk minimization
- Isoperimetric constants for product probability measures
- Weak convergence and empirical processes. With applications to statistics
- Simultaneous analysis of Lasso and Dantzig selector
- High-dimensional generalized linear models and the lasso
- Sparsity oracle inequalities for the Lasso
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- Reconstruction and subgaussian operators in asymptotic geometric analysis
- Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information
- Decoding by Linear Programming
- For most large underdetermined systems of linear equations the minimal \(\ell_1\)-norm solution is also the sparsest solution
- For most large underdetermined systems of equations, the minimal \(\ell_1\)-norm near-solution approximates the sparsest near-solution