Estimating large precision matrices via modified Cholesky decomposition
MaRDI QID: Q4986367 · zbMATH Open: 1464.62294 · arXiv: 1707.01143
Publication date: 27 April 2021
Abstract: We introduce the k-banded Cholesky prior for estimating a high-dimensional bandable precision matrix via the modified Cholesky decomposition. The bandable assumption is imposed on the Cholesky factor of the decomposition. We obtain the P-loss convergence rates under the spectral norm and the matrix ℓ∞ norm, as well as the corresponding minimax lower bounds. Since the P-loss convergence rate (Lee and Lee (2017)) is stronger than the posterior convergence rate, the obtained rates are also posterior convergence rates. Furthermore, when the true precision matrix is a k-banded matrix with some finite k, the obtained P-loss convergence rates coincide with the minimax rates. For general bandable precision matrices, the established convergence rates are slightly slower than the minimax lower bounds, but they are the fastest among existing Bayesian approaches. A simulation study compares the performance of the proposed method with other competing estimators in various scenarios.
Full work available at URL: https://arxiv.org/abs/1707.01143
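The modified Cholesky decomposition that the abstract relies on writes a precision matrix as Ω = Tᵀ D⁻¹ T, where T is unit lower triangular and D is diagonal; the bandable assumption then constrains T to have (approximately) banded entries. A minimal NumPy sketch of this factorization is below (the function name, bandwidth, and test parameters are illustrative, not taken from the paper; the recovery uses the standard flip trick to reduce the reverse Cholesky to NumPy's ordinary Cholesky):

```python
import numpy as np

def modified_cholesky(omega):
    """Factor a precision matrix as omega = T' D^{-1} T,
    with T unit lower triangular and D = diag(d)."""
    J = np.fliplr(np.eye(len(omega)))      # reversal permutation matrix
    L = np.linalg.cholesky(J @ omega @ J)  # flipped matrix is still SPD
    U = J @ L @ J                          # omega = U U', U upper triangular
    d = 1.0 / np.diag(U) ** 2              # since U = T' D^{-1/2}
    T = (U / np.diag(U)).T                 # unit lower triangular factor
    return T, d

# Build a k-banded example: T[i, j] = 0 whenever i - j > k.
rng = np.random.default_rng(0)
p, k = 6, 2
T_true = np.eye(p)
for i in range(p):
    for j in range(max(0, i - k), i):
        T_true[i, j] = rng.normal(scale=0.3)
d_true = rng.uniform(0.5, 2.0, size=p)
omega = T_true.T @ np.diag(1.0 / d_true) @ T_true

# The factorization recovers T and d exactly, and T_hat inherits
# the bandwidth-k pattern of the true Cholesky factor.
T_hat, d_hat = modified_cholesky(omega)
```

The uniqueness of the factorization (for a positive definite Ω) is what makes imposing a banded prior on T well defined: the bandwidth of T is an identifiable feature of Ω.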
Recommendations
- Posterior convergence rates for estimating large precision matrices using graphical models
- Bayesian estimation of large precision matrix based on Cholesky decomposition
- Minimax estimation of large precision matrices with bandable Cholesky factor
- Bayesian bandwidth test and selection for high-dimensional banded precision matrices
- Forward adaptive banding for estimating large covariance matrices
Cites Work
- High dimensional covariance matrix estimation using a factor model
- Covariance regularization by thresholding
- High dimensional sparse covariance estimation via directed acyclic graphs
- Regularized estimation of large covariance matrices
- On Consistency and Sparsity for Principal Components Analysis in High Dimensions
- Penalized likelihood methods for estimation of sparse high-dimensional directed acyclic graphs
- Convergence rates of posterior distributions.
- Probabilistic graphical models.
- Posterior contraction in sparse Bayesian factor models for massive covariance matrices
- Estimating sparse precision matrix: optimal rates of convergence and adaptive estimation
- Optimal rates of convergence for sparse covariance matrix estimation
- Optimal rates of convergence for covariance matrix estimation
- Posterior convergence rates of Dirichlet mixtures at smooth densities
- Rate-optimal posterior contraction for sparse PCA
- Law of log determinant of sample covariance matrix and optimal estimation of differential entropy for high-dimensional Gaussian distributions
- Bernstein-von Mises theorems for functionals of the covariance matrix
- Posterior convergence rates for estimating large precision matrices using graphical models
- Minimax optimal estimation of general bandable covariance matrices
- Optimal estimation and rank detection for sparse spiked covariance matrices
- Estimating structured high-dimensional covariance and precision matrices: optimal rates and adaptive estimation
- High dimensional posterior convergence rates for decomposable graphical models
- Posterior graph selection and estimation consistency for high-dimensional Bayesian DAG models
- Estimation of functionals of sparse covariance matrices
- Bayesian structure learning in graphical models
- Optimal Bayesian minimax rates for unconstrained large covariance matrices
- Adaptive estimation of covariance matrices via Cholesky decomposition
- A scalable sparse Cholesky based approach for learning high-dimensional covariance matrices in ordered data
Cited In (9)
- A new approach for ultrahigh dimensional precision matrix estimation
- Activation discovery with FDR control: application to fMRI data
- Bayesian joint inference for multiple directed acyclic graphs
- Contraction of a quasi-Bayesian model with shrinkage priors in precision matrix estimation
- Bayesian inference for high-dimensional decomposable graphs
- Minimax posterior convergence rates and model selection consistency in high-dimensional DAG models based on sparse Cholesky factors
- Precision matrix estimation under the horseshoe-like prior-penalty dual
- Post-processed posteriors for banded covariances
- Scalable Bayesian high-dimensional local dependence learning