Scalable Bayesian high-dimensional local dependence learning
From MaRDI portal
Publication: 6122014
DOI: 10.1214/21-BA1299 · arXiv: 2109.11795 · OpenAlex: W3202888691 · Wikidata: Q116033127 · Scholia: Q116033127 · MaRDI QID: Q6122014 · FDO: Q6122014
Authors: Kyoungjae Lee, Lizhen Lin
Publication date: 27 February 2024
Published in: Bayesian Analysis
Abstract: In this work, we propose a scalable Bayesian procedure for learning the local dependence structure in a high-dimensional model where the variables possess a natural ordering. The ordering of variables can be indexed by time, the vicinity of spatial locations, and so on, with the natural assumption that variables far apart tend to have weak correlations. Applications of such models abound in a variety of fields such as finance, genome association analysis, and spatial modeling. We adopt a flexible framework under which each variable is dependent on its neighbors or predecessors, and the neighborhood size can vary for each variable. It is of great interest to reveal this local dependence structure by estimating the covariance or precision matrix while yielding a consistent estimate of the varying neighborhood size for each variable. The existing literature on banded covariance matrix estimation, which assumes a fixed bandwidth, cannot be adapted to this general setup. We employ the modified Cholesky decomposition of the precision matrix and design a flexible prior for this model through appropriate priors on the neighborhood sizes and Cholesky factors. The posterior contraction rates of the Cholesky factor are derived and shown to be nearly or exactly minimax optimal, and our procedure leads to consistent estimates of the neighborhood size for all the variables. Another appealing feature of our procedure is its scalability to models with large numbers of variables, due to efficient posterior inference that does not resort to MCMC algorithms. Numerical comparisons are carried out with competitive methods, and applications are considered for some real datasets.
Full work available at URL: https://arxiv.org/abs/2109.11795
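The abstract's modified Cholesky decomposition expresses the precision matrix as Ω = (I − A)ᵀ D⁻¹ (I − A), where row j of the lower-triangular matrix A holds the coefficients from regressing variable j on its k_j immediate predecessors (the varying neighborhood size) and D holds the residual variances. The sketch below is an illustrative frequentist version of that decomposition only, not the paper's Bayesian procedure (which places priors on the neighborhood sizes and Cholesky factors); the function name and the use of least squares per row are assumptions for illustration.

```python
import numpy as np

def modified_cholesky_precision(X, bandwidths):
    """Illustrative precision-matrix estimate via the modified Cholesky
    decomposition with a varying bandwidth per variable.

    Assumes the columns of X are centered and already in their natural
    ordering. Variable j is regressed on its k_j nearest predecessors;
    the result is Omega = (I - A)^T D^{-1} (I - A).
    """
    n, p = X.shape
    A = np.zeros((p, p))               # autoregressive coefficients (strictly lower triangular)
    d = np.empty(p)                    # residual variances (diagonal of D)
    for j in range(p):
        k = min(bandwidths[j], j)      # neighborhood size for variable j
        if k == 0:
            d[j] = X[:, j].var()       # no predecessors: marginal variance
            continue
        Z = X[:, j - k:j]              # the k nearest predecessors
        coef, *_ = np.linalg.lstsq(Z, X[:, j], rcond=None)
        A[j, j - k:j] = coef
        d[j] = (X[:, j] - Z @ coef).var()
    L = np.eye(p) - A                  # unit lower-triangular Cholesky factor
    return L.T @ np.diag(1.0 / d) @ L
```

Because I − A is unit lower triangular and the residual variances are positive, the returned matrix is symmetric positive definite by construction, and its sparsity pattern directly reflects the per-variable neighborhood sizes.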
Cites Work
- Empirical Bayes posterior concentration in sparse high-dimensional linear models
- Understanding predictive information criteria for Bayesian models
- Statistics for high-dimensional data. Methods, theory and applications.
- A new approach to Cholesky-based covariance regularization in high dimensions
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell _{1}$-Constrained Quadratic Programming (Lasso)
- Bayesian fractional posteriors
- High dimensional sparse covariance estimation via directed acyclic graphs
- Regularized estimation of large covariance matrices
- \(\ell_{0}\)-penalized maximum likelihood for sparse directed acyclic graphs
- On consistency and sparsity for principal components analysis in high dimensions
- Penalized likelihood methods for estimation of sparse high-dimensional directed acyclic graphs
- Learning local dependence in ordered data
- Joint mean-covariance models with applications to longitudinal data: unconstrained parameterisation
- Estimating sparse precision matrix: optimal rates of convergence and adaptive estimation
- An invariant form for the prior probability in estimation problems
- Covariance matrix selection and estimation via penalised normal likelihood
- Asymptotically minimax empirical Bayes estimation of a sparse normal mean vector
- Posterior convergence rates for estimating large precision matrices using graphical models
- Identifiability of Gaussian linear structural equation models with homogeneous and heterogeneous error variances
- Posterior graph selection and estimation consistency for high-dimensional Bayesian DAG models
- Bayesian structure learning in graphical models
- Optimal Bayesian minimax rates for unconstrained large covariance matrices
- A scalable sparse Cholesky based approach for learning high-dimensional covariance matrices in ordered data
- Minimax posterior convergence rates and model selection consistency in high-dimensional DAG models based on sparse Cholesky factors
- Estimating large precision matrices via modified Cholesky decomposition
- Minimax estimation of large precision matrices with bandable Cholesky factor
- Bayesian bandwidth test and selection for high-dimensional banded precision matrices
- Hypothesis testing for band size detection of high-dimensional banded precision matrices
- A permutation-based Bayesian approach for inverse covariance estimation