Worst possible sub-directions in high-dimensional models
Publication:268764
DOI: 10.1016/j.jmva.2015.09.018 · zbMATH Open: 1334.62133 · arXiv: 1403.7023 · OpenAlex: W1819596675 · MaRDI QID: Q268764
Authors: Sara van de Geer
Publication date: 15 April 2016
Published in: Journal of Multivariate Analysis
Abstract: We examine the rate of convergence of the Lasso estimator of lower dimensional components of the high-dimensional parameter. Under bounds on the ℓ₁-norm of the worst possible sub-direction these rates are of order √(|J| log p / n), where p is the total number of parameters, J represents a subset of the parameters and n is the number of observations. We also derive rates in sup-norm in terms of the rate of convergence in ℓ₁-norm. The irrepresentable condition on a set J requires that the ℓ₁-norm of the worst possible sub-direction is sufficiently smaller than one. In that case sharp oracle results can be obtained. Moreover, if the coefficients in J are small enough, the Lasso will set these coefficients to zero. This extends known results which say that the irrepresentable condition on the inactive set (the set where the coefficients are exactly zero) implies no false positives. We further show that by de-sparsifying one obtains fast rates in supremum norm without conditions on the worst possible sub-direction. The main assumption here is that the approximate sparsity is of order o(√n / log p). The results are extended to M-estimation with an ℓ₁-penalty, for example for generalized linear models and exponential families. For the graphical Lasso this leads to an extension of known results to the case where the precision matrix is only approximately sparse. The bounds we provide are non-asymptotic, but we also present asymptotic formulations for ease of interpretation.
Full work available at URL: https://arxiv.org/abs/1403.7023
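The ℓ₁-penalized estimation discussed in the abstract can be illustrated with a small simulation. The sketch below is not the paper's code: it fits the Lasso by plain cyclic coordinate descent in NumPy, and the design matrix, noise level, sparsity pattern, and the choice of tuning parameter on the theoretical order √(log p / n) are all illustrative assumptions. It reports the sup-norm estimation error, the quantity whose rate the paper studies.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator S(z, t) = sign(z) * max(|z| - t, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_sweeps=200):
    """Lasso via cyclic coordinate descent, minimizing
    (1 / 2n) * ||y - X b||_2^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n   # per-coordinate curvature X_j'X_j / n
    r = y - X @ b                        # running residual
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * b[j]          # remove coordinate j's contribution
            rho = X[:, j] @ r / n        # partial correlation with residual
            b[j] = soft_threshold(rho, lam) / col_sq[j]
            r -= X[:, j] * b[j]          # restore residual with updated b_j
    return b

# Illustrative sparse linear model: 3 active coefficients out of p = 50.
rng = np.random.default_rng(0)
n, p = 100, 50
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]
X = rng.standard_normal((n, p))
y = X @ beta + 0.1 * rng.standard_normal(n)

lam = 0.1 * np.sqrt(np.log(p) / n)      # tuning on the order sqrt(log p / n)
bhat = lasso_cd(X, y, lam)
print("sup-norm error:", np.max(np.abs(bhat - beta)))
```

With this well-conditioned Gaussian design the active coefficients are recovered up to a small bias of order λ; under the conditions in the paper (bounds on the worst possible sub-direction, or de-sparsifying), such sup-norm errors can be controlled uniformly over the coordinates.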
Recommendations
- Necessary and sufficient conditions for variable selection consistency of the Lasso in high dimensions
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Strong consistency of Lasso estimators
- Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
Cites Work
- Confidence intervals for high-dimensional inverse covariance estimation
- Functional data analysis.
- Nonparametric functional data analysis. Theory and practice.
- A partial overview of the theory of statistics with functional data
- Inference for functional data with applications
- Variable selection in infinite-dimensional problems
- Title not available
- Statistics for high-dimensional data. Methods, theory and applications.
- Factor models and variable selection in high-dimensional regression analysis
- Simultaneous analysis of Lasso and Dantzig selector
- High-dimensional generalized linear models and the lasso
- High-dimensional graphs and variable selection with the Lasso
- Confidence Intervals and Hypothesis Testing for High-Dimensional Regression
- Square-root lasso: pivotal recovery of sparse signals via conic programming
- Title not available
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell _{1}$-Constrained Quadratic Programming (Lasso)
- Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- On asymptotically optimal confidence regions and tests for high-dimensional models
- Hypothesis Testing in High-Dimensional Regression Under the Gaussian Random Design Model: Asymptotic Theory
- Inference on treatment effects after selection among high-dimensional controls
- Title not available
- Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion
- High-dimensional covariance estimation by minimizing \(\ell _{1}\)-penalized log-determinant divergence
- Rate minimaxity of the Lasso and Dantzig selector for the \(l_{q}\) loss in \(l_{r}\) balls
- Contributions in infinite-dimensional statistics and related topics. Selected papers from the 3rd international workshop on functional and operatorial statistics (IWFOS'2014), Stresa, Italy, June 19--21, 2014
- Generic chaining and the \(\ell _{1}\)-penalty
- Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
- \(L_1\)-penalization in functional linear regression with subgaussian design
- Uniform post-selection inference for least absolute deviation regression and other Z-estimation problems
- Weakly decomposable regularization penalties and structured sparsity
Cited In (5)
- Title not available
- Confidence intervals for high-dimensional inverse covariance estimation
- Honest confidence regions and optimality in high-dimensional precision matrix estimation
- An introduction to recent advances in high/infinite dimensional statistics
- The benefit of group sparsity in group inference with de-biased scaled group Lasso