Scalable Bayesian high-dimensional local dependence learning
Publication:6122014
DOI: 10.1214/21-BA1299
arXiv: 2109.11795
OpenAlex: W3202888691
Wikidata: Q116033127 (Scholia: Q116033127)
MaRDI QID: Q6122014
Publication date: 27 February 2024
Published in: Bayesian Analysis
Full work available at URL: https://arxiv.org/abs/2109.11795
Cites Work
- Penalized likelihood methods for estimation of sparse high-dimensional directed acyclic graphs
- Empirical Bayes posterior concentration in sparse high-dimensional linear models
- Understanding predictive information criteria for Bayesian models
- Estimating sparse precision matrix: optimal rates of convergence and adaptive estimation
- \(\ell_{0}\)-penalized maximum likelihood for sparse directed acyclic graphs
- Posterior convergence rates for estimating large precision matrices using graphical models
- Asymptotically minimax empirical Bayes estimation of a sparse normal mean vector
- Statistics for high-dimensional data. Methods, theory and applications.
- Optimal Bayesian minimax rates for unconstrained large covariance matrices
- Bayesian fractional posteriors
- Posterior graph selection and estimation consistency for high-dimensional Bayesian DAG models
- High dimensional sparse covariance estimation via directed acyclic graphs
- A scalable sparse Cholesky based approach for learning high-dimensional covariance matrices in ordered data
- Bayesian structure learning in graphical models
- Identifiability of Gaussian linear structural equation models with homogeneous and heterogeneous error variances
- Minimax estimation of large precision matrices with bandable Cholesky factor
- Bayesian bandwidth test and selection for high-dimensional banded precision matrices
- Minimax posterior convergence rates and model selection consistency in high-dimensional DAG models based on sparse Cholesky factors
- Regularized estimation of large covariance matrices
- A new approach to Cholesky-based covariance regularization in high dimensions
- Hypothesis testing for band size detection of high-dimensional banded precision matrices
- Joint mean-covariance models with applications to longitudinal data: unconstrained parameterisation
- Sharp thresholds for high-dimensional and noisy sparsity recovery using \(\ell_{1}\)-constrained quadratic programming (Lasso)
- Estimating Large Precision Matrices via Modified Cholesky Decomposition
- A permutation-based Bayesian approach for inverse covariance estimation
- On Consistency and Sparsity for Principal Components Analysis in High Dimensions
- Learning Local Dependence In Ordered Data
- Covariance matrix selection and estimation via penalised normal likelihood
- An invariant form for the prior probability in estimation problems