A well-conditioned and sparse estimation of covariance and inverse covariance matrices using a joint penalty
Publication: 2834445
zbMATH Open: 1392.62156 · arXiv: 1412.7907 · MaRDI QID: Q2834445
Authors: Ashwini Maurya
Publication date: 22 November 2016
Published in: Journal of Machine Learning Research (JMLR)
Abstract: We develop a method for estimating well-conditioned and sparse covariance and inverse covariance matrices from a sample of vectors drawn from a sub-Gaussian distribution in the high-dimensional setting. The proposed estimators are obtained by minimizing a quadratic loss function with a joint penalty of the \(\ell_1\) norm and the variance of the eigenvalues. In contrast to some existing methods of covariance and inverse covariance matrix estimation, where the interest is often to estimate a sparse matrix, the proposed method is flexible in estimating a covariance matrix that is simultaneously sparse and well-conditioned. The proposed estimators are optimal in the sense that they achieve the minimax rate of estimation in operator norm for the underlying class of covariance and inverse covariance matrices. We give a very fast algorithm for computing these covariance and inverse covariance matrices which scales easily to large-scale data analysis problems. A simulation study for varying sample sizes and numbers of variables shows that the proposed estimators perform better than several other estimators for various choices of structured covariance and inverse covariance matrices. We also use the proposed estimator for tumor tissue classification using gene expression data and compare its performance with some other classification methods.
Full work available at URL: https://arxiv.org/abs/1412.7907
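The joint penalty described in the abstract combines an \(\ell_1\) penalty (promoting sparsity) with a penalty on the variance of the eigenvalues (promoting well-conditioning). A minimal sketch of that idea, not the paper's actual JPEN algorithm: the function name `jpen_sketch` and the two-step procedure below (soft-thresholding off-diagonal entries, then shrinking the spectrum toward its mean) are illustrative assumptions, chosen only to show how the two penalties pull the estimate toward sparsity and toward a smaller condition number.

```python
import numpy as np

def jpen_sketch(S, lam=0.05, gamma=1.0):
    """Illustrative two-step proxy for a sparsity + eigenvalue-variance penalty.

    S     : sample covariance matrix (symmetric, p x p)
    lam   : soft-threshold level, standing in for the l1 penalty
    gamma : spectrum-shrinkage strength, standing in for the
            eigenvalue-variance penalty
    """
    # l1-style step: soft-threshold the off-diagonal entries,
    # keeping the diagonal intact.
    T = np.sign(S) * np.maximum(np.abs(S) - lam, 0.0)
    np.fill_diagonal(T, np.diag(S))
    # Eigenvalue-variance-style step: shrink each eigenvalue toward
    # the mean eigenvalue, which reduces the spread of the spectrum
    # and hence the condition number.
    vals, vecs = np.linalg.eigh(T)
    vals = (vals + gamma * vals.mean()) / (1.0 + gamma)
    return vecs @ np.diag(vals) @ vecs.T
```

With `gamma = 0` the spectrum is untouched; increasing `gamma` pulls the eigenvalues together, so the estimate's condition number decreases relative to the thresholded-only matrix. This mirrors the trade-off the paper's joint penalty controls, though the actual estimator is defined through a single joint optimization rather than two sequential steps.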
Recommendations
- Sparse permutation invariant covariance estimation
- A joint convex penalty for inverse covariance matrix estimation
- Condition-number-regularized covariance estimation
- Sparse estimation of high-dimensional inverse covariance matrices with explicit eigenvalue constraints
- High dimensional inverse covariance matrix estimation via linear programming
Cited In (12)
- Adjusting for high-dimensional covariates in sparse precision matrix estimation by \(\ell_1\)-penalization
- Sparse permutation invariant covariance estimation
- Nested kriging predictions for datasets with a large number of observations
- An efficient algorithm for sparse inverse covariance matrix estimation based on dual formulation
- An efficient numerical method for condition number constrained covariance matrix approximation
- Sparse estimation of high-dimensional inverse covariance matrices with explicit eigenvalue constraints
- A phase I change‐point method for high‐dimensional process with sparse mean shifts
- A joint convex penalty for inverse covariance matrix estimation
- Simultaneous multiple response regression and inverse covariance matrix estimation via penalized Gaussian maximum likelihood
- Stable portfolio selection strategy for mean-variance-CVaR model under high-dimensional scenarios
- Bandwidth selection for large covariance and precision matrices
- Stable estimation of a covariance matrix guided by nuclear norm penalties