Minimax estimation of large precision matrices with bandable Cholesky factor
From MaRDI portal
Abstract: The last decade has witnessed significant methodological and theoretical advances in estimating large precision matrices. In particular, in scientific applications such as longitudinal data, meteorology and spectroscopy, the ordering of the variables can be interpreted through a bandable structure on the Cholesky factor of the precision matrix. However, the minimax theory has remained largely unknown, in contrast to the well-established minimax results over the corresponding bandable covariance matrices. In this paper, we focus on two commonly used types of parameter spaces and develop the optimal rates of convergence under both the operator norm and the Frobenius norm. A striking phenomenon is found: the two types of parameter spaces are fundamentally different under the operator norm but enjoy the same rate optimality under the Frobenius norm, in sharp contrast to the equivalence of the two corresponding types of bandable covariance matrices under both norms. This fundamental difference is established by carefully constructing the corresponding minimax lower bounds. Two new estimation procedures are developed: for the operator norm, our optimal procedure is based on a novel local cropping estimator targeting all principal submatrices of the precision matrix, while for the Frobenius norm, our optimal procedure relies on a delicate regression-based thresholding rule. Lepski's method is considered to achieve optimal adaptation. We further establish rate optimality in the nonparanormal model. Numerical studies are carried out to confirm our theoretical findings.
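The regression interpretation of a bandable Cholesky factor mentioned in the abstract can be illustrated with a minimal sketch: each variable is regressed on at most k of its immediate predecessors, the fitted coefficients populate a unit lower-triangular factor T, and the residual variances form a diagonal matrix D, yielding the estimate Omega_hat = T' D^{-1} T. This is a generic modified-Cholesky sketch under these assumptions, not the paper's actual local cropping or thresholding procedure; the function name and the AR(1) toy data are illustrative.

```python
import numpy as np

def banded_cholesky_precision(X, k):
    """Estimate a precision matrix whose Cholesky factor is k-banded.

    Regress each column of X on its (at most) k immediate predecessors;
    the negated coefficients fill the unit lower-triangular factor T and
    the residual variances fill the diagonal D, giving T' D^{-1} T.
    """
    n, p = X.shape
    T = np.eye(p)            # unit lower-triangular Cholesky factor
    d = np.empty(p)          # residual variances
    d[0] = X[:, 0].var()
    for j in range(1, p):
        lo = max(0, j - k)
        Z = X[:, lo:j]                                   # k predecessors of X_j
        coef, *_ = np.linalg.lstsq(Z, X[:, j], rcond=None)
        resid = X[:, j] - Z @ coef
        T[j, lo:j] = -coef                               # sign per the decomposition
        d[j] = resid.var()
    return T.T @ np.diag(1.0 / d) @ T

# Toy usage: AR(1) data, whose true Cholesky factor is 1-banded and whose
# true precision matrix is tridiagonal.
rng = np.random.default_rng(0)
n, p = 2000, 10
X = np.zeros((n, p))
X[:, 0] = rng.standard_normal(n)
for j in range(1, p):
    X[:, j] = 0.5 * X[:, j - 1] + rng.standard_normal(n)

Omega_hat = banded_cholesky_precision(X, k=1)
```

By construction the estimate is symmetric, positive definite, and exactly banded with bandwidth k, which is what makes the bandable parameter spaces in the abstract natural for ordered variables.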
Recommendations
- Minimax optimal estimation of general bandable covariance matrices
- Estimating large precision matrices via modified Cholesky decomposition
- Estimating sparse precision matrix: optimal rates of convergence and adaptive estimation
- Estimating structured high-dimensional covariance and precision matrices: optimal rates and adaptive estimation
- Posterior convergence rates for estimating large precision matrices using graphical models
Cites work
- scientific article (zbMATH DE number 3907540; no title available)
- scientific article (zbMATH DE number 28602; no title available)
- scientific article (zbMATH DE number 490141; no title available)
- A new measure of rank correlation
- A constrained \(\ell _{1}\) minimization approach to sparse precision matrix estimation
- Adaptive thresholding for sparse covariance matrix estimation
- Asymptotic normality and optimalities in estimation of large Gaussian graphical models
- Asymptotics of sample eigenstructure for a large dimensional spiked covariance model
- Banding sample autocovariance matrices of stationary processes
- Covariance matrix selection and estimation via penalised normal likelihood
- Covariance regularization by thresholding
- Estimating sparse precision matrix: optimal rates of convergence and adaptive estimation
- Estimating structured high-dimensional covariance and precision matrices: optimal rates and adaptive estimation
- Estimation of high-dimensional prior and posterior covariance matrices in Kalman filter variants
- First-Order Methods for Sparse Covariance Selection
- High dimensional inverse covariance matrix estimation via linear programming
- High-dimensional covariance estimation by minimizing \(\ell _{1}\)-penalized log-determinant divergence
- High-dimensional graphs and variable selection with the Lasso
- Minimax and adaptive inference in nonparametric function estimation
- Model selection and estimation in the Gaussian graphical model
- Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data
- Nonparametric estimation of large covariance matrices of longitudinal data
- On consistency and sparsity for principal components analysis in high dimensions
- On minimax wavelet estimators
- On the distribution of the largest eigenvalue in principal components analysis
- Operator norm consistent estimation of large-dimensional sparse covariance matrices
- Optimal rates of convergence for covariance matrix estimation
- Optimal rates of convergence for estimating Toeplitz covariance matrices
- Optimal rates of convergence for sparse covariance matrix estimation
- Posterior convergence rates for estimating large precision matrices using graphical models
- ROCKET: robust confidence intervals via Kendall's tau for transelliptical graphical models
- Regularized estimation of large covariance matrices
- Sparse estimation of large covariance matrices via a nested Lasso penalty
- Sparse matrix inversion with scaled Lasso
- Sparse permutation invariant covariance estimation
- Sparsistency and rates of convergence in large covariance matrix estimation
- Sub-Gaussian estimators of the mean of a random matrix with heavy-tailed entries
- The nonparanormal: semiparametric estimation of high dimensional undirected graphs
- The strong limits of random matrix spectra for sample matrices of independent elements
- User-friendly covariance estimation for heavy-tailed distributions
Cited in (7)
- A generative approach to modeling data with quantitative and qualitative responses
- Minimax optimal estimation of general bandable covariance matrices
- A new approach for ultrahigh dimensional precision matrix estimation
- User-friendly covariance estimation for heavy-tailed distributions
- Estimating large precision matrices via modified Cholesky decomposition
- Computationally efficient banding of large covariance matrices for ordered data and connections to banding the inverse Cholesky factor
- Scalable Bayesian high-dimensional local dependence learning
MaRDI item: Q2215744