The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso)
DOI: 10.1214/11-EJS624
zbMath: 1274.62471
arXiv: 1001.5176
OpenAlex: W2084768311
MaRDI QID: Q1952206
Sara van de Geer, Peter Bühlmann, Shuheng Zhou
Publication date: 28 May 2013
Published in: Electronic Journal of Statistics
Full work available at URL: https://arxiv.org/abs/1001.5176
- Nonparametric regression and quantile regression (62G08)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
Related Items (36)
Endogenous treatment effect estimation using high-dimensional instruments and double selection ⋮ Best subset selection via a modern optimization lens ⋮ Monotone splines Lasso ⋮ Variable Selection With Second-Generation P-Values ⋮ Thresholding least-squares inference in high-dimensional regression models ⋮ Estimation for High-Dimensional Linear Mixed-Effects Models Using \(\ell_1\)-Penalization ⋮ Ridge regression revisited: debiasing, thresholding and bootstrap ⋮ Model Selection With Lasso-Zero: Adding Straw to the Haystack to Better Find Needles ⋮ \(\ell_{0}\)-penalized maximum likelihood for sparse directed acyclic graphs ⋮ Statistical significance in high-dimensional linear models ⋮ Rejoinder on: "Hierarchical inference for genome-wide association studies: a view on methodology with software" ⋮ D-trace estimation of a precision matrix using adaptive lasso penalties ⋮ A Critical Review of LASSO and Its Derivatives for Variable Selection Under Dependence Among Covariates ⋮ A Unified Framework for Change Point Detection in High-Dimensional Linear Models ⋮ Weak Signal Identification and Inference in Penalized Likelihood Models for Categorical Responses ⋮ Testing stochastic dominance with many conditioning variables ⋮ Efficient estimation of approximate factor models via penalized maximum likelihood ⋮ A power analysis for Model-X knockoffs with \(\ell_p\)-regularized statistics ⋮ High-dimensional simultaneous inference with the bootstrap ⋮ Positive-definite thresholding estimators of covariance matrices with zeros ⋮ An integrated surrogate model constructing method: annealing combinable Gaussian process ⋮ Calibrating nonconvex penalized regression in ultra-high dimension ⋮ Robust recovery of signals with partially known support information using weighted BPDN ⋮ A look at robustness and stability of \(\ell_1\)-versus \(\ell_0\)-regularization: discussion of papers by Bertsimas et al. and Hastie et al. ⋮ CAM: causal additive models, high-dimensional order search and penalized regression ⋮ High-dimensional variable screening and bias in subsequent inference, with an empirical comparison ⋮ An \(\ell_1\)-oracle inequality for the Lasso in multivariate finite mixture of multivariate Gaussian regression models ⋮ Regularized estimation in sparse high-dimensional time series models ⋮ High-dimensional inference: confidence intervals, \(p\)-values and R-software \texttt{hdi} ⋮ Quasi-likelihood and/or robust estimation in high dimensions ⋮ Discussion of: "Grouping strategies and thresholding for high dimension linear models" ⋮ High-dimensional variable selection via low-dimensional adaptive learning ⋮ Regularized rank-based estimation of high-dimensional nonparanormal graphical models ⋮ Lasso and probabilistic inequalities for multivariate point processes ⋮ Preconditioning the Lasso for sign consistency ⋮ Orthogonal one step greedy procedure for heteroscedastic linear models
Cites Work
- Nearly unbiased variable selection under minimax concave penalty
- The Adaptive Lasso and Its Oracle Properties
- Statistics for high-dimensional data. Methods, theory and applications.
- The Dantzig selector and sparsity oracle inequalities
- Enhancing sparsity by reweighted \(\ell _{1}\) minimization
- Some sharp performance bounds for least squares regression with \(L_1\) regularization
- Near-ideal model selection by \(\ell _{1}\) minimization
- High-dimensional variable selection
- Sparsity in penalized empirical risk minimization
- Rejoinder: One-step sparse estimates in nonconcave penalized likelihood models
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- A survey of cross-validation procedures for model selection
- Lasso-type recovery of sparse representations for high-dimensional data
- Risk bounds for model selection via penalization
- Persistence in high-dimensional linear predictor selection and the virtue of overparametrization
- Least squares estimation with complexity penalties
- On the conditions used to prove oracle results for the Lasso
- Simultaneous analysis of Lasso and Dantzig selector
- High-dimensional generalized linear models and the lasso
- Sparsity oracle inequalities for the Lasso
- Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
- Aggregation for Gaussian regression
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- High-dimensional graphs and variable selection with the Lasso
- Decoding by Linear Programming
- The Group Lasso for Logistic Regression
- Information-Theoretic Limits on Sparsity Recovery in the High-Dimensional and Noisy Setting
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using \(\ell_1\)-Constrained Quadratic Programming (Lasso)
- Aggregation and Sparsity Via \(\ell_1\) Penalized Least Squares
- On non-asymptotic bounds for estimation in generalized linear models with highly correlated design
- Stable signal recovery from incomplete and inaccurate measurements