The benefit of group sparsity in group inference with de-biased scaled group Lasso
Abstract: We study confidence regions and approximate chi-squared tests for variable groups in high-dimensional linear regression. When the group size is small, low-dimensional projection estimators for individual coefficients can be used directly to construct efficient confidence regions and p-values for the group. However, the existing analyses of low-dimensional projection estimators do not carry through to chi-squared-based inference for a large group of variables without inflating the sample size by a factor of the group size. We propose to de-bias a scaled group Lasso to obtain chi-squared-based statistical inference for potentially very large groups of variables. We prove that, under proper conditions, the proposed methods capture the benefit of group sparsity for statistical inference of the noise level and of variable groups, both large and small. The benefit is especially strong when the group size is large.
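To make the de-biasing recipe concrete, the sketch below works through the generic construction used in this literature: start from an initial penalized estimator \(\hat{\beta}\), form the de-biased estimator \(\hat{b} = \hat{\beta} + \hat{\Theta}X^{\top}(y - X\hat{\beta})/n\) with a (relaxed) inverse \(\hat{\Theta}\) of the Gram matrix, and compare the group statistic \(n\,\hat{b}_{G}^{\top}\hat{b}_{G}/\hat{\sigma}^{2}\) to a \(\chi^{2}_{|G|}\) quantile to test \(H_{0}\colon \beta_{G}=0\). This is a minimal toy illustration, not the paper's construction: it substitutes a plain Lasso for the scaled group Lasso, assumes a Gaussian design with known identity covariance so that \(\hat{\Theta} = I\), and uses a crude residual-based noise estimate.

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 200, 500
X = rng.standard_normal((n, p))          # toy Gaussian design, Sigma = I assumed known
beta = np.zeros(p)
beta[:3] = [1.5, -2.0, 1.0]              # a few strong signals outside the test group
sigma = 1.0
y = X @ beta + sigma * rng.standard_normal(n)

# Initial estimator: plain Lasso as a stand-in for the paper's scaled group Lasso.
lam = sigma * np.sqrt(2 * np.log(p) / n)
beta_hat = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_

# De-biasing step: b = beta_hat + Theta X^T (y - X beta_hat) / n, with Theta = I here.
resid = y - X @ beta_hat
b = beta_hat + X.T @ resid / n

# Chi-squared test for the (null) group G = {10, ..., 14}: under H0: beta_G = 0,
# n * ||b_G||^2 / sigma^2 is approximately chi^2 with |G| degrees of freedom.
G = np.arange(10, 15)
sigma_hat = np.sqrt(resid @ resid / n)   # crude noise estimate, not the scaled-Lasso one
T = n * (b[G] @ b[G]) / sigma_hat**2
print(f"chi2 statistic = {T:.2f}, p-value = {stats.chi2.sf(T, df=len(G)):.3f}")
```

In the paper's actual setting, \(\hat{\Theta}\) must be estimated (e.g. by node-wise regression) and \(\hat{\sigma}\) is obtained jointly with \(\hat{\beta}\) from the scaled group Lasso; controlling the resulting bias term uniformly over large groups is where the claimed benefit of group sparsity enters.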
Recommendations
- Group Bound: Confidence Intervals for Groups of Variables in Sparse High Dimensional Regression Without Assumptions on the Design
- The benefit of group sparsity
- Group inference in high dimensions with applications to hierarchical testing
- Debiasing the Lasso: optimal sample size for Gaussian designs
- The de-biased group Lasso estimation for varying coefficient models
Cites work
- scientific article; zbMATH DE number 490141
- scientific article; zbMATH DE number 845714
- A constrained \(\ell_{1}\) minimization approach to sparse precision matrix estimation
- A group bridge approach for variable selection
- A selective review of group selection in high-dimensional models
- Asymptotic normality and optimalities in estimation of large Gaussian graphical models
- Asymptotics for Lasso-type estimators.
- Can one estimate the conditional distribution of post-model-selection estimators?
- Comments on: \(\ell_{1}\)-penalization for mixture regression models
- Confidence Intervals and Hypothesis Testing for High-Dimensional Regression
- Confidence intervals for high-dimensional inverse covariance estimation
- Consistency of the group Lasso and multiple kernel learning
- Coordinate descent algorithms for nonconvex penalized regression, with applications to biological feature selection
- Decoding by Linear Programming
- Hanson-Wright inequality and sub-Gaussian concentration
- High-dimensional inference: confidence intervals, \(p\)-values and R-software \texttt{hdi}
- High-dimensional variable selection
- Hypothesis Testing in High-Dimensional Regression Under the Gaussian Random Design Model: Asymptotic Theory
- Model Selection and Estimation in Regression with Grouped Variables
- Nearly unbiased variable selection under minimax concave penalty
- On asymptotically optimal confidence regions and tests for high-dimensional models
- On the asymptotic properties of the group lasso estimator for linear models
- On the conditions used to prove oracle results for the Lasso
- Oracle inequalities and optimal inference under group sparsity
- Quantile regression for competing risks data with missing cause of failure
- Rate minimaxity of the Lasso and Dantzig selector for the \(l_{q}\) loss in \(l_{r}\) balls
- Rejoinder: ``A significance test for the lasso''
- Scaled sparse linear regression
- Shifting Inequality and Recovery of Sparse Signals
- Simultaneous analysis of Lasso and Dantzig selector
- Sparse matrix inversion with scaled Lasso
- Sparse principal component analysis and iterative thresholding
- Spectral norm of products of random and deterministic matrices
- Square-root lasso: pivotal recovery of sparse signals via conic programming
- Statistical significance in high-dimensional linear models
- Statistics for high-dimensional data. Methods, theory and applications.
- Support union recovery in high-dimensional multivariate regression
- The Dantzig selector and sparsity oracle inequalities
- The Group Square-Root Lasso: Theoretical Properties and Fast Algorithms
- The benefit of group sparsity
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Worst possible sub-directions in high-dimensional models
- \(\ell_{1}\)-penalization for mixture regression models
- \(p\)-values for high-dimensional regression
Cited in (22)
- Multivariate log-contrast regression with sub-compositional predictors: testing the association between preterm infants' gut microbiome and neurobehavioral outcomes
- Covariate-adjusted inference for differential analysis of high-dimensional networks
- AIC for the group Lasso in generalized linear models
- Group inference in high dimensions with applications to hierarchical testing
- Estimator augmentation with applications in high-dimensional group inference
- Multicarving for high-dimensional post-selection inference
- Rejoinder on: ``High-dimensional simultaneous inference with the bootstrap''
- An Interactive Greedy Approach to Group Sparsity in High Dimensions
- Statistical inference in sparse high-dimensional additive models
- Generalized linear models with structured sparsity estimators
- Spatially relaxed inference on high-dimensional linear models
- Linear hypothesis testing in dense high-dimensional linear models
- Constructing confidence intervals for the signals in sparse phase retrieval
- The de-biased group Lasso estimation for varying coefficient models
- Moving force identification based on group Lasso and compressed sensing
- scientific article; zbMATH DE number 7370643
- The benefit of group sparsity
- Tuning-free heterogeneous inference in massive networks
- scientific article; zbMATH DE number 7306864
- Generalized matrix decomposition regression: estimation and inference for two-way structured data
- Sparse Group Lasso: Optimal Sample Complexity, Convergence Rate, and Statistical Inference
- Double bias correction for high-dimensional sparse additive hazards regression with covariate measurement errors