On estimation of the diagonal elements of a sparse precision matrix
From MaRDI portal
Publication: 302437
DOI: 10.1214/16-EJS1148 · zbMATH Open: 1342.62088 · arXiv: 1504.04696 · MaRDI QID: Q302437 · FDO: Q302437
Samuel Balmand, Arnak S. Dalalyan
Publication date: 5 July 2016
Published in: Electronic Journal of Statistics
Abstract: In this paper, we present several estimators of the diagonal elements of the inverse of the covariance matrix, called the precision matrix, of a sample of iid random vectors. The focus is on high-dimensional vectors having a sparse precision matrix. It is now well understood that when the underlying distribution is Gaussian, the columns of the precision matrix can be estimated independently from one another by solving linear regression problems under sparsity constraints. This approach leads to a computationally efficient strategy for estimating the precision matrix that starts by estimating the regression vectors, then estimates the diagonal entries of the precision matrix and, in a final step, combines these estimators to obtain estimators of the off-diagonal entries. While the step of estimating the regression vectors has been intensively studied over the past decade, the problem of deriving statistically accurate estimators of the diagonal entries has received much less attention. The goal of the present paper is to fill this gap by presenting four estimators---that seem the most natural ones---of the diagonal entries of the precision matrix and then performing a comprehensive empirical evaluation of these estimators. The estimators under consideration are the residual variance, the relaxed maximum likelihood, the symmetry-enforced maximum likelihood and the penalized maximum likelihood. We show, both theoretically and empirically, that when the aforementioned regression vectors are estimated without error, the symmetry-enforced maximum likelihood estimator has the smallest estimation error. However, in a more realistic setting, when the regression vectors are estimated by a sparsity-favoring computationally efficient method, the qualities of the estimators become relatively comparable, with a slight advantage for the residual variance estimator.
Full work available at URL: https://arxiv.org/abs/1504.04696
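The regression-based construction described in the abstract rests on a standard Gaussian identity: the diagonal entry \(\Omega_{jj}\) of the precision matrix equals the inverse of the conditional variance of the \(j\)-th coordinate given the others, which is estimated by the residual variance of regressing that coordinate on the rest. The sketch below illustrates the residual variance estimator of the diagonal in the classical \(n > p\) regime using ordinary least squares; it is a minimal illustration only, since the paper's high-dimensional setting would replace least squares with a sparsity-favoring regression estimator.

```python
import numpy as np

def residual_variance_diagonal(X):
    """Estimate the diagonal of the precision matrix via node-wise regressions.

    For Gaussian data, Omega_jj = 1 / Var(X_j | X_-j); the conditional
    variance is estimated by the residual variance of regressing the j-th
    column on the remaining columns. Plain least squares is used here as
    an illustration (valid when n > p); a sparse regression estimator
    would be used in the high-dimensional setting.
    """
    n, p = X.shape
    Xc = X - X.mean(axis=0)               # center each column
    diag = np.empty(p)
    for j in range(p):
        y = Xc[:, j]
        Z = np.delete(Xc, j, axis=1)      # all columns except the j-th
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        diag[j] = n / np.sum(resid ** 2)  # inverse of the residual variance
    return diag
```

As a sanity check, sampling from a Gaussian with a known tridiagonal precision matrix and comparing the output of this function against the true diagonal shows the estimates concentrating around the true values as the sample size grows.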
Cites Work
- Title not available
- Title not available
- Title not available
- Title not available
- On the shortest spanning subtree of a graph and the traveling salesman problem
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- High-dimensional graphs and variable selection with the Lasso
- The nonparanormal: semiparametric estimation of high dimensional undirected graphs
- Square-root lasso: pivotal recovery of sparse signals via conic programming
- Improved matrix uncertainty selector
- Sparse nonparametric graphical models
- A note on two problems in connexion with graphs
- Introductory lectures on convex optimization. A basic course.
- Sparse inverse covariance estimation with the graphical lasso
- Least squares after model selection in high-dimensional sparse models
- Scaled sparse linear regression
- A Direct Estimation Approach to Sparse Linear Discriminant Analysis
- Model selection and estimation in the Gaussian graphical model
- \(\ell_{1}\)-penalization for mixture regression models
- High dimensional inverse covariance matrix estimation via linear programming
- Sparse Matrix Inversion with Scaled Lasso
- A Constrained \(\ell_1\) Minimization Approach to Sparse Precision Matrix Estimation
- Adaptive estimation of a quadratic functional by model selection.
- Estimating sparse precision matrix: optimal rates of convergence and adaptive estimation
- Pivotal estimation via square-root lasso in nonparametric regression
- Conditional Means and Covariances of Normal Variables with Singular Covariance Matrix
- Rejoinder to the comments on: \(\ell _{1}\)-penalization for mixture regression models
Cited In (6)
- Fast estimates for the diagonal of the inverse of large scale matrices appearing in applications
- Title not available
- Variable selection for generalized linear model with highly correlated covariates
- The approximation characteristic of diagonal matrix in probabilistic setting
- Title not available
- Robust Estimators in High-Dimensions Without the Computational Intractability