Simultaneous analysis of Lasso and Dantzig selector
From MaRDI portal
Publication: 2388978
DOI: 10.1214/08-AOS620 · zbMath: 1173.62022 · arXiv: 0801.1095 · OpenAlex: W2116581043 · Wikidata: Q100786247 (Scholia: Q100786247) · MaRDI QID: Q2388978
Peter J. Bickel, Ya'acov Ritov, Alexandre B. Tsybakov
Publication date: 22 July 2009
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/0801.1095
Classifications (MSC):
- Nonparametric regression and quantile regression (62G08)
- Asymptotic properties of nonparametric inference (62G20)
- Linear regression; mixed models (62J05)
- Prediction theory (aspects of stochastic processes) (60G25)
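The page records only bibliographic data for the publication, which analyzes the Lasso and the Dantzig selector together. For orientation, here is a minimal sketch (not taken from the paper) contrasting the two estimators: the Lasso via scikit-learn, and the Dantzig selector via its standard linear-programming reformulation solved with SciPy. The data, dimensions, and tuning level `lam` are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import linprog
from sklearn.linear_model import Lasso

# Hypothetical sparse linear model: only the first 3 of 20 coefficients are nonzero.
rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + 0.1 * rng.standard_normal(n)

lam = 0.1  # illustrative tuning level, not the paper's choice

# Lasso: argmin_b (1/2n)||y - Xb||^2 + lam * ||b||_1
lasso = Lasso(alpha=lam, fit_intercept=False).fit(X, y)

# Dantzig selector: argmin_b ||b||_1  s.t.  ||X^T (y - Xb)||_inf <= t,
# written as an LP in b = u - v with u, v >= 0 (t matched to the Lasso scaling).
t = n * lam
G = X.T @ X
c = X.T @ y
A_ub = np.block([[G, -G], [-G, G]])        # encodes |X^T(y - Xb)| <= t componentwise
b_ub = np.concatenate([c + t, t - c])
res = linprog(np.ones(2 * p), A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
dantzig = res.x[:p] - res.x[p:]

print("Lasso estimate:  ", np.round(lasso.coef_[:5], 2))
print("Dantzig estimate:", np.round(dantzig[:5], 2))
```

Both estimators shrink the three true coefficients toward zero by roughly `lam`, and under the paper's restricted-eigenvalue-type conditions their solutions behave similarly, which is the sense in which they admit a "simultaneous analysis".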
Related Items (only showing first 100 items)
Worst possible sub-directions in high-dimensional models ⋮ Nonnegative adaptive Lasso for ultra-high dimensional regression models and a two-stage method applied in financial modeling ⋮ Regularity properties for sparse regression ⋮ Best subset selection via a modern optimization lens ⋮ An analysis of penalized interaction models ⋮ Censored linear model in high dimensions. Penalised linear regression on high-dimensional data with left-censored response variable ⋮ Minimum distance Lasso for robust high-dimensional regression ⋮ SLOPE is adaptive to unknown sparsity and asymptotically minimax ⋮ Asymptotic properties of lasso in high-dimensional partially linear models ⋮ Oracle inequalities for the Lasso in the high-dimensional Aalen multiplicative intensity model ⋮ The benefit of group sparsity in group inference with de-biased scaled group Lasso ⋮ Geometric inference for general high-dimensional linear inverse problems ⋮ Solution of linear ill-posed problems using overcomplete dictionaries ⋮ Oracle inequalities, variable selection and uniform inference in high-dimensional correlated random effects panel data models ⋮ Econometric estimation with high-dimensional moment equalities ⋮ Conjugate gradient acceleration of iteratively re-weighted least squares methods ⋮ Group-wise semiparametric modeling: a SCSE approach ⋮ Estimation of matrices with row sparsity ⋮ Asymtotics of Dantzig selector for a general single-index model ⋮ Sharp MSE bounds for proximal denoising ⋮ The \(l_q\) consistency of the Dantzig selector for Cox's proportional hazards model ⋮ On the stability of sparse convolutions ⋮ Strong consistency of Lasso estimators ⋮ Recovery of high-dimensional sparse signals via \(\ell_1\)-minimization ⋮ Oracle inequalities for the lasso in the Cox model ⋮ Rates of convergence of the adaptive LASSO estimators to the oracle distribution and higher order refinements by the bootstrap ⋮ Fast learning rate of multiple kernel learning: trade-off between sparsity and 
smoothness ⋮ Statistical significance in high-dimensional linear models ⋮ The geometry of least squares in the 21st century ⋮ The Dantzig selector and sparsity oracle inequalities ⋮ Sparse recovery under matrix uncertainty ⋮ Nearly optimal minimax estimator for high-dimensional sparse linear regression ⋮ Impacts of high dimensionality in finite samples ⋮ A partial overview of the theory of statistics with functional data ⋮ Asymptotic properties of Lasso+mLS and Lasso+Ridge in sparse high-dimensional linear regression ⋮ The \(L_1\) penalized LAD estimator for high dimensional linear regression ⋮ Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization ⋮ Adjusted regularized estimation in the accelerated failure time model with high dimensional covariates ⋮ Correlated variables in regression: clustering and sparse estimation ⋮ Grouping strategies and thresholding for high dimensional linear models ⋮ High-dimensional covariance matrix estimation with missing observations ⋮ Regularized 3D functional regression for brain image data via Haar wavelets ⋮ \(\ell_{1}\)-penalization for mixture regression models ⋮ Rejoinder to the comments on: \(\ell _{1}\)-penalization for mixture regression models ⋮ Sparsity in multiple kernel learning ⋮ Phase transition in limiting distributions of coherence of high-dimensional random matrices ⋮ Shrinkage estimation for identification of linear components in additive models ⋮ Adaptive Dantzig density estimation ⋮ Sharp support recovery from noisy random measurements by \(\ell_1\)-minimization ⋮ On verifiable sufficient conditions for sparse signal recovery via \(\ell_{1}\) minimization ⋮ Non-convex penalized estimation in high-dimensional models with single-index structure ⋮ Limiting laws of coherence of random matrices with applications to testing covariance structure and construction of compressed sensing matrices ⋮ A majorization-minimization approach to variable selection 
using spike and slab priors ⋮ Sparse regression learning by aggregation and Langevin Monte-Carlo ⋮ Mirror averaging with sparsity priors ⋮ Transductive versions of the Lasso and the Dantzig selector ⋮ General nonexact oracle inequalities for classes with a subexponential envelope ⋮ Regularization for Cox's proportional hazards model with NP-dimensionality ⋮ High-dimensional Cox regression analysis in genetic studies with censored survival outcomes ⋮ Variable selection in infinite-dimensional problems ⋮ Estimation of high-dimensional partially-observed discrete Markov random fields ⋮ A sharp nonasymptotic bound and phase diagram of \(L_{1/2}\) regularization ⋮ Optimal computational and statistical rates of convergence for sparse nonconvex learning problems ⋮ A new perspective on least squares under convex constraint ⋮ The horseshoe estimator: posterior concentration around nearly black vectors ⋮ \(L_1\)-penalization in functional linear regression with subgaussian design ⋮ On higher order isotropy conditions and lower bounds for sparse quadratic forms ⋮ Comment on ``Hypothesis testing by convex optimization ⋮ Normalized and standard Dantzig estimators: two approaches ⋮ On the residual empirical process based on the ALASSO in high dimensions and its functional oracle property ⋮ Oracle inequalities for high dimensional vector autoregressions ⋮ Weighted \(\ell_1\)-penalized corrected quantile regression for high dimensional measurement error models ⋮ Robust inference on average treatment effects with possibly more covariates than observations ⋮ Cox process functional learning ⋮ Estimation of average treatment effects with panel data: asymptotic theory and implementation ⋮ Additive model selection ⋮ Sparse recovery under weak moment assumptions ⋮ A Cluster Elastic Net for Multivariate Regression ⋮ Group Inference in High Dimensions with Applications to Hierarchical Testing ⋮ Covariate Selection in High-Dimensional Generalized Linear Models With Measurement Error ⋮ 
Exponential screening and optimal rates of sparse estimation ⋮ Estimation of high-dimensional low-rank matrices ⋮ Estimation of (near) low-rank matrices with noise and high-dimensional scaling ⋮ Performance guarantees for individualized treatment rules ⋮ Model-assisted inference for treatment effects using regularized calibrated estimation with high-dimensional data ⋮ On asymptotically optimal confidence regions and tests for high-dimensional models ⋮ Lasso Inference for High-Dimensional Time Series ⋮ Regularized estimation in sparse high-dimensional time series models ⋮ Distribution-Free Predictive Inference For Regression ⋮ Selection by partitioning the solution paths ⋮ Double Machine Learning for Partially Linear Mixed-Effects Models with Repeated Measurements ⋮ Regularizing Double Machine Learning in Partially Linear Endogenous Models ⋮ A unified approach to model selection and sparse recovery using regularized least squares ⋮ Geometric median and robust estimation in Banach spaces ⋮ Factor-Adjusted Regularized Model Selection ⋮ Sparse Sliced Inverse Regression Via Lasso ⋮ Asymptotic normality and optimalities in estimation of large Gaussian graphical models ⋮ Gaussian graphical model estimation with false discovery rate control ⋮ Signal extraction approach for sparse multivariate response regression ⋮ Adaptive estimation of the baseline hazard function in the Cox model by model selection, with high-dimensional covariates
Cites Work
- The Dantzig selector and sparsity oracle inequalities
- Sparsity in penalized empirical risk minimization
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Lasso-type recovery of sparse representations for high-dimensional data
- Persistence in high-dimensional linear predictor selection and the virtue of overparametrization
- Functional aggregation for nonparametric regression
- Asymptotics for Lasso-type estimators
- Least angle regression. (With discussion)
- High-dimensional generalized linear models and the lasso
- Sparsity oracle inequalities for the Lasso
- Aggregation for Gaussian regression
- Pathwise coordinate optimization
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- High-dimensional graphs and variable selection with the Lasso
- Stable recovery of sparse overcomplete representations in the presence of noise
- The Group Lasso for Logistic Regression
- A new approach to variable selection in least squares problems
- Aggregation and Sparsity Via ℓ1 Penalized Least Squares
- Sparse Density Estimation with ℓ1 Penalties