Scalable Bayesian high-dimensional local dependence learning
From MaRDI portal
Publication:6122014
Abstract: In this work, we propose a scalable Bayesian procedure for learning the local dependence structure in a high-dimensional model whose variables possess a natural ordering. The ordering may be indexed by time, by the vicinity of spatial locations, and so on, with the natural assumption that variables far apart tend to be weakly correlated. Applications of such models abound in a variety of fields such as finance, genome association analysis, and spatial modeling. We adopt a flexible framework in which each variable depends on its neighbors or predecessors, and the neighborhood size can vary from variable to variable. It is of great interest to reveal this local dependence structure by estimating the covariance or precision matrix while yielding a consistent estimate of the varying neighborhood size for each variable. The existing literature on banded covariance matrix estimation, which assumes a fixed bandwidth, cannot be adapted to this general setup. We employ the modified Cholesky decomposition of the precision matrix and design a flexible prior for this model through appropriate priors on the neighborhood sizes and Cholesky factors. We derive posterior contraction rates for the Cholesky factor that are nearly or exactly minimax optimal, and our procedure yields consistent estimates of the neighborhood size for every variable. Another appealing feature of our procedure is its scalability to models with large numbers of variables, owing to efficient posterior inference that does not resort to MCMC algorithms. Numerical comparisons with competing methods are carried out, and applications to some real datasets are considered.
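To illustrate the modified Cholesky decomposition underlying the abstract, the sketch below estimates a precision matrix by regressing each variable on a variable-specific number of immediate predecessors. This is a minimal frequentist (OLS) analogue for intuition only, not the paper's Bayesian procedure; the function name, the `bandwidths` argument, and the toy AR(1) data are all illustrative assumptions.

```python
import numpy as np

def modified_cholesky_precision(X, bandwidths):
    """Illustrative modified Cholesky estimate of a precision matrix.

    Variable j is regressed on its k_j = bandwidths[j] immediate
    predecessors; the fitted coefficients fill row j of the strictly
    lower-triangular factor A, the residual variances fill D, and the
    precision matrix is Omega = (I - A)^T D^{-1} (I - A).
    """
    n, p = X.shape
    A = np.zeros((p, p))           # strictly lower-triangular Cholesky factor
    d = np.zeros(p)                # residual (innovation) variances
    for j in range(p):
        k = min(bandwidths[j], j)  # neighborhood size, capped at j predecessors
        if k == 0:
            d[j] = X[:, j].var()
            continue
        Z = X[:, j - k:j]          # the k nearest predecessors of variable j
        coef, *_ = np.linalg.lstsq(Z, X[:, j], rcond=None)
        A[j, j - k:j] = coef
        d[j] = (X[:, j] - Z @ coef).var()
    T = np.eye(p) - A
    return T.T @ np.diag(1.0 / d) @ T

# Toy example: AR(1)-style data, so each variable has neighborhood size 1.
rng = np.random.default_rng(0)
p, n = 5, 2000
X = np.zeros((n, p))
X[:, 0] = rng.standard_normal(n)
for j in range(1, p):
    X[:, j] = 0.6 * X[:, j - 1] + rng.standard_normal(n)

Omega = modified_cholesky_precision(X, bandwidths=[0, 1, 1, 1, 1])
```

With all bandwidths equal to one, the factor `I - A` is lower bidiagonal, so the resulting `Omega` is exactly tridiagonal: the banded local dependence structure is encoded directly in the Cholesky factor, which is what makes varying the neighborhood size per variable natural in this parameterization.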
Cites work
- scientific article; zbMATH DE number 4070082 (title unavailable)
- A new approach to Cholesky-based covariance regularization in high dimensions
- A permutation-based Bayesian approach for inverse covariance estimation
- A scalable sparse Cholesky based approach for learning high-dimensional covariance matrices in ordered data
- An invariant form for the prior probability in estimation problems
- Asymptotically minimax empirical Bayes estimation of a sparse normal mean vector
- Bayesian bandwidth test and selection for high-dimensional banded precision matrices
- Bayesian fractional posteriors
- Bayesian structure learning in graphical models
- Covariance matrix selection and estimation via penalised normal likelihood
- Empirical Bayes posterior concentration in sparse high-dimensional linear models
- Estimating large precision matrices via modified Cholesky decomposition
- Estimating sparse precision matrix: optimal rates of convergence and adaptive estimation
- High dimensional sparse covariance estimation via directed acyclic graphs
- Hypothesis testing for band size detection of high-dimensional banded precision matrices
- Identifiability of Gaussian linear structural equation models with homogeneous and heterogeneous error variances
- Joint mean-covariance models with applications to longitudinal data: unconstrained parameterisation
- Learning local dependence in ordered data
- Minimax estimation of large precision matrices with bandable Cholesky factor
- Minimax posterior convergence rates and model selection consistency in high-dimensional DAG models based on sparse Cholesky factors
- On consistency and sparsity for principal components analysis in high dimensions
- Optimal Bayesian minimax rates for unconstrained large covariance matrices
- Penalized likelihood methods for estimation of sparse high-dimensional directed acyclic graphs
- Posterior convergence rates for estimating large precision matrices using graphical models
- Posterior graph selection and estimation consistency for high-dimensional Bayesian DAG models
- Regularized estimation of large covariance matrices
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell _{1}$-Constrained Quadratic Programming (Lasso)
- Statistics for high-dimensional data. Methods, theory and applications.
- Understanding predictive information criteria for Bayesian models
- \(\ell_{0}\)-Penalized maximum likelihood for sparse directed acyclic graphs