Stable estimation of a covariance matrix guided by nuclear norm penalties
From MaRDI portal
Abstract: Estimation of covariance matrices or their inverses plays a central role in many statistical methods. For these methods to work reliably, estimated matrices must not only be invertible but also well-conditioned. In this paper we present an intuitive prior that shrinks the classic sample covariance estimator towards a stable target. We prove that our estimator is consistent and asymptotically efficient. Thus, it gracefully transitions towards the sample covariance matrix as the number of samples grows relative to the number of covariates. We also demonstrate the utility of our estimator in two standard situations -- discriminant analysis and EM clustering -- when the number of samples is dominated by or comparable to the number of covariates.
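The abstract describes shrinking the sample covariance matrix toward a stable, well-conditioned target. As an illustration only (not the paper's nuclear-norm-penalized estimator), a minimal sketch of this general shrinkage idea, assuming a convex combination with a scaled-identity target and a user-chosen weight `alpha`:

```python
import numpy as np

def shrink_covariance(X, alpha=0.5):
    """Illustrative linear shrinkage of the sample covariance toward a
    scaled-identity target. This is a generic stand-in for the paper's
    penalized estimator; `alpha` is a hypothetical mixing weight."""
    n, p = X.shape
    S = np.cov(X, rowvar=False, bias=True)   # sample covariance (MLE)
    target = (np.trace(S) / p) * np.eye(p)   # well-conditioned target
    # Convex combination: invertible whenever alpha > 0, even if n < p
    return (1 - alpha) * S + alpha * target
```

Even when the number of samples is smaller than the number of covariates (so the sample covariance itself is singular), the shrunken estimate remains positive definite, which is the practical stability property the abstract emphasizes.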
Recommendations
- Condition-number-regularized covariance estimation
- A well-conditioned and sparse estimation of covariance and inverse covariance matrices using a joint penalty
- A well-conditioned estimator for large-dimensional covariance matrices
- Covariance matrix selection and estimation via penalised normal likelihood
- Estimation of covariance matrices based on hierarchical inverse-Wishart priors
Cites work
- scientific article (zbMATH DE number 6381735; no title available)
- scientific article (zbMATH DE number 847272; no title available)
- scientific article (zbMATH DE number 3244317; no title available)
- A constrained formulation of maximum-likelihood estimation for normal mixture distributions
- A likelihood-based constrained algorithm for multivariate normal mixture models
- A trace inequality of John von Neumann
- A well-conditioned estimator for large-dimensional covariance matrices
- Bayesian regularization for normal mixture estimation and model-based clustering
- Constrained monotone EM algorithms for finite mixture of multivariate Gaussians
- Covariance estimation: the GLM and regularization perspectives
- Covariance matrix selection and estimation via penalised normal likelihood
- Estimation of a covariance matrix under Stein's loss
- Estimation of high-dimensional low-rank matrices
- Estimation of the multivariate normal covariance matrix under some restrictions
- Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions
- Finite mixture models
- Flexible covariance estimation in graphical Gaussian models
- High-dimensional covariance estimation
- High-dimensional covariance estimation by minimizing \(\ell _{1}\)-penalized log-determinant divergence
- High-dimensional covariance matrix estimation in approximate factor models
- Hub Discovery in Partial Correlation Graphs
- Large-Scale Correlation Screening
- Mathematical and statistical methods for genetic analysis.
- Minimax estimation of large covariance matrices under \(\ell_1\)-norm
- Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data
- Nonlinear shrinkage estimation of large-dimensional covariance matrices
- Nonparametric estimation of large covariance matrices of longitudinal data
- Partial correlation estimation by joint sparse regression models
- Penalized Normal Likelihood and Ridge Regularization of Correlation and Covariance Matrices
- Randomized Algorithms for Matrices and Data
- Regularized estimation of large covariance matrices
- Regularized linear discriminant analysis and its application in microarrays
- Shrinkage Estimators for Covariance Matrices
- Simultaneous modelling of the Cholesky decomposition of several covariance matrices
- Sparse inverse covariance estimation with the graphical lasso
- The variational form of certain Bayes estimators
- Wishart distributions for decomposable covariance graph models
Cited in (3)