Overview of debiased Lasso in high-dimensional linear model
From MaRDI portal
Publication: 6181971
Authors: Yanlin Tang
Publication date: 23 January 2024
Full work available at URL: http://aps.ecnu.edu.cn/EN/10.3969/j.issn.1001-4268.2023.03.010
Recommendations
- Debiasing the debiased Lasso with bootstrap
- On the asymptotic variance of the debiased Lasso
- Projection-based Inference for High-dimensional Linear Models
- Posterior asymptotic normality for an individual coordinate in high-dimensional linear regression
- Debiasing the Lasso: optimal sample size for Gaussian designs
Cites Work
- Nearly unbiased variable selection under minimax concave penalty
- High-dimensional simultaneous inference with the bootstrap
- Title not available
- Statistics for high-dimensional data. Methods, theory and applications.
- Lasso-type recovery of sparse representations for high-dimensional data
- High-dimensional inference: confidence intervals, \(p\)-values and R-software \texttt{hdi}
- On the conditions used to prove oracle results for the Lasso
- Simultaneous analysis of Lasso and Dantzig selector
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- High-dimensional graphs and variable selection with the Lasso
- Confidence Intervals and Hypothesis Testing for High-Dimensional Regression
- Square-root lasso: pivotal recovery of sparse signals via conic programming
- Title not available
- Sure Independence Screening for Ultrahigh Dimensional Feature Space
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell _{1}$-Constrained Quadratic Programming (Lasso)
- Ridge Regression: Biased Estimation for Nonorthogonal Problems
- Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models
- On asymptotically optimal confidence regions and tests for high-dimensional models
- Sparse inverse covariance estimation with the graphical lasso
- Persistence in high-dimensional linear predictor selection and the virtue of overparametrization
- Asymptotics for Lasso-type estimators.
- Scaled sparse linear regression
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Comments on: \(\ell_{1}\)-penalization for mixture regression models
- Best subset selection, persistence in high-dimensional statistical learning and optimization under \(l_1\) constraint
- Atomic decomposition by basis pursuit
- Adaptive Forward-Backward Greedy Algorithm for Learning Sparse Representations
- Corrigendum in “Just Relax: Convex Programming Methods for Identifying Sparse Signals in Noise” [Mar 06 1030-1051]
Cited In (2)
This page was built for publication: Overview of debiased Lasso in high-dimensional linear model
MaRDI item: Q6181971