Worst possible sub-directions in high-dimensional models
Abstract: We examine the rate of convergence of the Lasso estimator of lower dimensional components of the high-dimensional parameter. Under bounds on the \(\ell_1\)-norm of the worst possible sub-direction these rates are of order \(\sqrt{|J| \log p / n}\), where \(p\) is the total number of parameters, \(J \subset \{1, \ldots, p\}\) represents a subset of the parameters and \(n\) is the number of observations. We also derive rates in sup-norm in terms of the rate of convergence in \(\ell_1\)-norm. The irrepresentable condition on a set \(J\) requires that the \(\ell_1\)-norm of the worst possible sub-direction is sufficiently smaller than one. In that case sharp oracle results can be obtained. Moreover, if the coefficients in \(J\) are small enough, the Lasso will set these coefficients to zero. This extends known results which say that the irrepresentable condition on the inactive set (the set where coefficients are exactly zero) implies no false positives. We further show that by de-sparsifying one obtains fast rates in supremum norm without conditions on the worst possible sub-direction. The main assumption here is that the approximate sparsity is of order \(o(\sqrt{n} / \log p)\). The results are extended to M-estimation with \(\ell_1\)-penalty, for generalized linear models and exponential families for example. For the graphical Lasso this leads to an extension of known results to the case where the precision matrix is only approximately sparse. The bounds we provide are non-asymptotic, but we also present asymptotic formulations for ease of interpretation.
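The de-sparsifying step mentioned in the abstract can be illustrated with a small numerical sketch. Nothing below is the paper's own code: the coordinate-descent Lasso solver, the toy data, and the use of an exact inverse Gram matrix for `Theta` (feasible only when \(p < n\); in the genuinely high-dimensional setting one would use a surrogate such as the nodewise Lasso) are all illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator used in the Lasso coordinate update."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=500):
    """Lasso by cyclic coordinate descent (illustrative implementation):
    minimizes ||y - X b||^2 / (2 n) + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n   # per-coordinate curvature
    resid = y - X @ b
    for _ in range(n_iter):
        for j in range(p):
            resid += X[:, j] * b[j]     # remove coordinate j from the fit
            rho = X[:, j] @ resid / n
            b[j] = soft_threshold(rho, lam) / col_sq[j]
            resid -= X[:, j] * b[j]     # put the updated coordinate back
    return b

def desparsified_lasso(X, y, b_lasso, Theta):
    """One-step de-sparsified (de-biased) correction of a Lasso estimate:
    b_lasso + Theta @ X.T @ (y - X @ b_lasso) / n,
    where Theta approximates the inverse Gram matrix X.T @ X / n."""
    n = X.shape[0]
    return b_lasso + Theta @ X.T @ (y - X @ b_lasso) / n
```

On a low-dimensional toy problem one can take `Theta = np.linalg.inv(X.T @ X / n)`; the corrected estimator is then no longer sparse, but its coordinate-wise (sup-norm) error is free of the shrinkage bias the plain Lasso incurs, which is the phenomenon the abstract's fast sup-norm rates describe.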
Recommendations
- scientific article; zbMATH DE number 7168272
- Necessary and sufficient conditions for variable selection consistency of the Lasso in high dimensions
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Strong consistency of Lasso estimators
- Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
Cites work
- scientific article; zbMATH DE number 5957408
- scientific article; zbMATH DE number 1181283
- scientific article; zbMATH DE number 845714
- A partial overview of the theory of statistics with functional data
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- Confidence Intervals and Hypothesis Testing for High-Dimensional Regression
- Confidence intervals for high-dimensional inverse covariance estimation
- Confidence intervals for low dimensional parameters in high dimensional linear models
- Contributions in infinite-dimensional statistics and related topics. Selected papers from the 3rd international workshop on functional and operatorial statistics (IWFOS'2014), Stresa, Italy, June 19--21, 2014
- Factor models and variable selection in high-dimensional regression analysis
- Functional data analysis.
- Generic chaining and the \(\ell _{1}\)-penalty
- High-dimensional covariance estimation by minimizing \(\ell _{1}\)-penalized log-determinant divergence
- High-dimensional generalized linear models and the lasso
- High-dimensional graphs and variable selection with the Lasso
- Hypothesis Testing in High-Dimensional Regression Under the Gaussian Random Design Model: Asymptotic Theory
- Inference for functional data with applications
- Inference on treatment effects after selection among high-dimensional controls
- Nonparametric functional data analysis. Theory and practice.
- Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion
- On asymptotically optimal confidence regions and tests for high-dimensional models
- Rate minimaxity of the Lasso and Dantzig selector for the \(l_{q}\) loss in \(l_{r}\) balls
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell _{1}$-Constrained Quadratic Programming (Lasso)
- Simultaneous analysis of Lasso and Dantzig selector
- Square-root lasso: pivotal recovery of sparse signals via conic programming
- Statistics for high-dimensional data. Methods, theory and applications.
- Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
- Uniform post-selection inference for least absolute deviation regression and other Z-estimation problems
- Variable selection in infinite-dimensional problems
- Weakly decomposable regularization penalties and structured sparsity
- \(L_1\)-penalization in functional linear regression with subgaussian design
Cited in (5)
- scientific article; zbMATH DE number 7168272
- Confidence intervals for high-dimensional inverse covariance estimation
- Honest confidence regions and optimality in high-dimensional precision matrix estimation
- An introduction to recent advances in high/infinite dimensional statistics
- The benefit of group sparsity in group inference with de-biased scaled group Lasso