High dimensional posterior convergence rates for decomposable graphical models
From MaRDI portal
Publication: 902216
DOI: 10.1214/15-EJS1084
zbMATH Open: 1329.62152
MaRDI QID: Q902216 (FDO Q902216)
Authors: Ruoxuan Xiang, Kshitij Khare, Malay Ghosh
Publication date: 7 January 2016
Published in: Electronic Journal of Statistics
Full work available at URL: https://projecteuclid.org/euclid.ejs/1451577218
Recommendations
- Bayesian inference for high-dimensional decomposable graphs
- Posterior graph selection and estimation consistency for high-dimensional Bayesian DAG models
- Posterior convergence rates for estimating large precision matrices using graphical models
- Minimax posterior convergence rates and model selection consistency in high-dimensional DAG models based on sparse Cholesky factors
- Posterior contraction in sparse Bayesian factor models for massive covariance matrices
Mathematics Subject Classification: Bayesian inference (62F15); Asymptotic properties of nonparametric inference (62G20); Graphical methods in statistics (62A09)
Cites Work
- High-dimensional graphs and variable selection with the Lasso
- Covariance regularization by thresholding
- Conjugate priors for exponential families
- Hyper Markov laws in the statistical analysis of decomposable graphical models
- Sparse inverse covariance estimation with the graphical lasso
- On the distribution of the largest eigenvalue in principal components analysis
- Regularized estimation of large covariance matrices
- Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data
- Model selection and estimation in the Gaussian graphical model
- Partial correlation estimation by joint sparse regression models
- Posterior contraction in sparse Bayesian factor models for massive covariance matrices
- Wishart distributions for decomposable graphs
- Cholesky decomposition of a hyper inverse Wishart matrix
- Flexible covariance estimation in graphical Gaussian models
- Tracy-Widom limit for the largest eigenvalue of a large class of complex sample covariance matrices
- Covariance matrix selection and estimation via penalised normal likelihood
- Inequalities for the gamma function
- Asymptotic normality of posterior distributions for exponential families when the number of parameters tends to infinity.
- Hyper Inverse Wishart Distribution for Non-decomposable Graphs and its Application to Bayesian Inference for Gaussian Graphical Models
- A Monte Carlo method for computing the marginal likelihood in nondecomposable Gaussian graphical models
- Simple Linear-Time Algorithms to Test Chordality of Graphs, Test Acyclicity of Hypergraphs, and Selectively Reduce Acyclic Hypergraphs
- Algorithmic Aspects of Vertex Elimination on Graphs
- Some Extensions of W. Gautschi's Inequalities for the Gamma Function
- A Metropolis-Hastings based method for sampling from the \(G\)-Wishart distribution in Gaussian graphical models
- Posterior convergence rates for estimating large precision matrices using graphical models
- Simulation of hyper-inverse Wishart distributions for non-decomposable graphs
- Functionally compatible local characteristics for the local specification of priors in graphical models
- Bayesian structure learning in graphical models
Cited In (18)
- A permutation-based Bayesian approach for inverse covariance estimation
- Posterior graph selection and estimation consistency for high-dimensional Bayesian DAG models
- Joint Bayesian Variable and DAG Selection Consistency for High-dimensional Regression Models with Network-structured Covariates
- On the non-local priors for sparsity selection in high-dimensional Gaussian DAG models
- Posterior contraction in sparse Bayesian factor models for massive covariance matrices
- Consistent Bayesian sparsity selection for high-dimensional Gaussian DAG models with multiplicative and beta-mixture priors
- Parametrizations and reference priors for multinomial decomposable graphical models
- Estimating large precision matrices via modified Cholesky decomposition
- Contraction of a quasi-Bayesian model with shrinkage priors in precision matrix estimation
- Bayesian inference for high-dimensional decomposable graphs
- Covariate-Assisted Bayesian Graph Learning for Heterogeneous Data
- Minimax posterior convergence rates and model selection consistency in high-dimensional DAG models based on sparse Cholesky factors
- Precision matrix estimation under the horseshoe-like prior-penalty dual
- On the contraction properties of some high-dimensional quasi-posterior distributions
- Posterior convergence rates for high-dimensional precision matrix estimation using \(G\)-Wishart priors
- Bayesian bandwidth test and selection for high-dimensional banded precision matrices
- Development of network-guided transcriptomic risk score for disease prediction
- The Graphical Horseshoe Estimator for Inverse Covariance Matrices