Estimation of linear projections of non-sparse coefficients in high-dimensional regression (Q2286364)
From MaRDI portal
scientific article
Language | Label | Description | Also known as
---|---|---|---
English | Estimation of linear projections of non-sparse coefficients in high-dimensional regression | scientific article |
Statements
Estimation of linear projections of non-sparse coefficients in high-dimensional regression (English)
22 January 2020
A linear regression model \(Y=X\beta+\epsilon\) is considered, where \(Y\) is the \(n\)-dimensional vector of observed values of the response variable, \(X\) is a known design matrix, \(\beta\) is a \(p\)-dimensional vector of unknown parameters, and the components of the error vector \(\epsilon\) are i.i.d. with zero mean and finite variance. The response and the covariates are assumed centered and, for simplicity, the weak dependence induced by the centering is ignored. It is assumed that \(p\) is much larger than \(n\), and no sparsity of \(\beta\) is required. The parameter of interest is the one-dimensional linear projection \(\theta=a^\top\beta\). The authors propose the unbiased estimator \(\hat\theta=a^\top\hat\beta\), where \(\hat\beta\) is the least squares estimator computed with the Moore-Penrose pseudo-inverse of \(X^\top X\). Sufficient conditions for the weak consistency of \(\hat\theta\) are given, and it is shown that, in the normal model, the estimator is minimax and admissible. Thus, for linear projections, no regularization or shrinkage is needed. The estimator is applied to a high-dimensional dataset from brain imaging, where the signal is shown to be weak, non-sparse, and significantly different from zero.
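A minimal numpy sketch of the estimator described in the review, assuming the minimum-norm least squares interpretation via the Moore-Penrose pseudo-inverse; the data and all variable names here are synthetic and illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 200                       # p much larger than n
X = rng.standard_normal((n, p))      # design matrix
beta = rng.standard_normal(p)        # non-sparse coefficient vector
y = X @ beta + 0.1 * rng.standard_normal(n)

a = rng.standard_normal(p)           # direction of the linear projection

# Least squares estimate using the pseudo-inverse of X^T X,
# as in the review; this coincides with the minimum-norm
# least squares solution pinv(X) @ y.
beta_hat = np.linalg.pinv(X.T @ X) @ X.T @ y

# Estimate of the one-dimensional projection theta = a^T beta.
theta_hat = a @ beta_hat
```

Note that no regularization or shrinkage is applied, in line with the paper's claim that none is needed for estimating linear projections.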
high-dimensional regression
linear projections