Debiasing the Lasso: optimal sample size for Gaussian designs
DOI: 10.1214/17-AOS1630 · zbMath: 1407.62270 · arXiv: 1508.02757 · MaRDI QID: Q1991670
Adel Javanmard, Andrea Montanari
Publication date: 30 October 2018
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/1508.02757
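As a point of reference, the estimator analyzed in this paper is the debiased (de-sparsified) Lasso, \(\hat{\theta}^{\mathrm{d}} = \hat{\theta} + \frac{1}{n} M X^{\top} (y - X \hat{\theta})\), where \(\hat{\theta}\) is the Lasso estimate and, for a Gaussian design with known covariance \(\Sigma\), the decorrelating matrix \(M\) can be taken to be \(\Sigma^{-1}\). The sketch below is illustrative only, not the authors' code; the function name, regularization value, and identity fallback for \(M\) are assumptions made here.

```python
import numpy as np
from sklearn.linear_model import Lasso

def debiased_lasso(X, y, alpha, Sigma_inv=None):
    """Debiased Lasso sketch: theta_d = theta_hat + (1/n) M X^T (y - X theta_hat)."""
    n, p = X.shape
    # Lasso fit; note scikit-learn minimizes (1/(2n))||y - Xw||^2 + alpha*||w||_1,
    # so `alpha` is on a different scale than the lambda in the paper.
    theta_hat = Lasso(alpha=alpha, fit_intercept=False).fit(X, y).coef_
    # Decorrelating matrix M: Sigma^{-1} when the design covariance is known;
    # the identity is a crude placeholder otherwise (an assumption of this sketch).
    M = Sigma_inv if Sigma_inv is not None else np.eye(p)
    theta_d = theta_hat + M @ X.T @ (y - X @ theta_hat) / n
    return theta_hat, theta_d

# Hypothetical demo: i.i.d. N(0, 1) design (Sigma = I), sparse signal.
rng = np.random.default_rng(0)
n, p, s = 200, 500, 10
X = rng.standard_normal((n, p))
theta0 = np.zeros(p)
theta0[:s] = 1.0
y = X @ theta0 + rng.standard_normal(n)
theta_hat, theta_d = debiased_lasso(X, y, alpha=0.1)
```

Under the paper's conditions, \(\sqrt{n}(\hat{\theta}^{\mathrm{d}}_i - \theta_{0,i})\) is approximately Gaussian coordinate-wise, which is what enables confidence intervals and p-values for individual coefficients.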
Mathematics Subject Classification:
- Asymptotic properties of parametric estimators (62F12)
- Estimation in multivariate analysis (62H12)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Linear regression; mixed models (62J05)
Related Items (44)
- An optimal statistical and computational framework for generalized tensor estimation
- Significance testing in non-sparse high-dimensional linear models
- Testability of high-dimensional linear models with nonsparse structures
- Semi-supervised empirical risk minimization: using unlabeled data to improve prediction
- De-biasing the Lasso with degrees-of-freedom adjustment
- Ridge regression revisited: debiasing, thresholding and bootstrap
- Projection-based Inference for High-dimensional Linear Models
- An improved algorithm for high-dimensional continuous threshold expectile model with variance heterogeneity
- Exploiting Disagreement Between High-Dimensional Variable Selectors for Uncertainty Visualization
- Predictor ranking and false discovery proportion control in high-dimensional regression
- Compositional knockoff filter for high-dimensional regression analysis of microbiome data
- A Critical Review of LASSO and Its Derivatives for Variable Selection Under Dependence Among Covariates
- Implicit regularization in nonconvex statistical estimation: gradient descent converges linearly for phase retrieval, matrix completion, and blind deconvolution
- Inference and Estimation for Random Effects in High-Dimensional Linear Mixed Models
- Debiasing the debiased Lasso with bootstrap
- Detangling robustness in high dimensions: composite versus model-averaged estimation
- Debiasing convex regularized estimators and interval estimation in linear models
- UNIFORM-IN-SUBMODEL BOUNDS FOR LINEAR REGRESSION IN A MODEL-FREE FRAMEWORK
- Forward-selected panel data approach for program evaluation
- Online Debiasing for Adaptively Collected High-Dimensional Data With Applications to Time Series Analysis
- Entrywise limit theorems for eigenvectors of signal-plus-noise matrix models with weak signals
- Universality of regularized regression estimators in high dimensions
- The Lasso with general Gaussian designs with applications to hypothesis testing
- An integrated surrogate model constructing method: annealing combinable Gaussian process
- Estimation and inference in sparse multivariate regression and conditional Gaussian graphical models under an unbalanced distributed setting
- Spike-and-Slab Group Lassos for Grouped Regression and Sparse Generalized Additive Models
- Variable selection in the Box-Cox power transformation model
- Adaptive estimation of high-dimensional signal-to-noise ratios
- The likelihood ratio test in high-dimensional logistic regression is asymptotically a rescaled Chi-square
- Variable selection via adaptive false negative control in linear regression
- Optimal sparsity testing in linear regression model
- Inference without compatibility: using exponential weighting for inference on a parameter of a linear model
- On rank estimators in increasing dimensions
- The de-biased group Lasso estimation for varying coefficient models
- Second-order Stein: SURE for SURE and other applications in high-dimensional inference
- Control variate selection for Monte Carlo integration
- Spectral method and regularized MLE are both optimal for top-\(K\) ranking
- Scale calibration for high-dimensional robust regression
- On the asymptotic variance of the debiased Lasso
- Spatially relaxed inference on high-dimensional linear models
- Ensemble Kalman inversion for sparse learning of dynamical systems from time-averaged data
- An \(\ell_p\) theory of PCA and spectral clustering
- Information criteria bias correction for group selection
- Asymptotic normality of robust \(M\)-estimators with convex penalty
Cites Work
- On asymptotically optimal confidence regions and tests for high-dimensional models
- Nearly unbiased variable selection under minimax concave penalty
- Confidence intervals for high-dimensional inverse covariance estimation
- Honest confidence regions and optimality in high-dimensional precision matrix estimation
- Familywise error rate control via knockoffs
- SLOPE is adaptive to unknown sparsity and asymptotically minimax
- Statistical significance in high-dimensional linear models
- High-dimensional inference in misspecified linear models
- Statistics for high-dimensional data. Methods, theory and applications.
- \(\ell_{1}\)-penalization for mixture regression models
- High-dimensional variable selection
- Controlling the false discovery rate via knockoffs
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- High-dimensional inference: confidence intervals, \(p\)-values and R-software \texttt{hdi}
- Least squares after model selection in high-dimensional sparse models
- Confidence intervals for high-dimensional linear regression: minimax rates and adaptivity
- A significance test for the lasso
- Rejoinder: "A significance test for the lasso"
- Universality in polytope phase transitions and message passing algorithms
- Simultaneous analysis of Lasso and Dantzig selector
- Accuracy assessment for high-dimensional linear regression
- On the "degrees of freedom" of the lasso
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- High-dimensional graphs and variable selection with the Lasso
- Confidence Intervals and Hypothesis Testing for High-Dimensional Regression
- Hypothesis Testing in High-Dimensional Regression Under the Gaussian Random Design Model: Asymptotic Theory
- Accurate Prediction of Phase Transitions in Compressed Sensing via a Connection to Minimax Denoising
- Reconstruction From Anisotropic Random Measurements
- Scaled sparse linear regression
- Cross validation in LASSO and its acceleration
- A study of error variance estimation in Lasso regression
- Stable recovery of sparse overcomplete representations in the presence of noise
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Uncertainty principles and ideal atomic decomposition
- Sure Independence Screening for Ultrahigh Dimensional Feature Space
- Stability Selection
- The LASSO Risk for Gaussian Matrices
- The Noise-Sensitivity Phase Transition in Compressed Sensing
- The Dynamics of Message Passing on Dense Graphs, with Applications to Compressed Sensing
- EigenPrism: Inference for High Dimensional Signal-to-Noise Ratios
- Group Bound: Confidence Intervals for Groups of Variables in Sparse High Dimensional Regression Without Assumptions on the Design
- Stable signal recovery from incomplete and inaccurate measurements
- Some Comments on \(C_p\)
- Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models
- The Estimation of Prediction Error
- A new look at the statistical model identification