Complexity analysis of Bayesian learning of high-dimensional DAG models and their equivalence classes
Publication: 6136582
DOI: 10.1214/23-AOS2280
arXiv: 2101.04084
OpenAlex: W4386035702
MaRDI QID: Q6136582
Publication date: 31 August 2023
Published in: The Annals of Statistics
Abstract: Structure learning via MCMC sampling is known to be very challenging because of the enormous search space and the existence of Markov equivalent DAGs. Theoretical results on the mixing behavior are lacking. In this work, we prove the rapid mixing of a random walk Metropolis-Hastings algorithm, which reveals that the complexity of Bayesian learning of sparse equivalence classes grows only polynomially in $n$ and $p$, under some high-dimensional assumptions. A series of high-dimensional consistency results is obtained, including the strong selection consistency of an empirical Bayes model for structure learning. Our proof is based on two new results. First, we derive a general mixing time bound on finite state spaces, which can be applied to local MCMC schemes for other model selection problems. Second, we construct high-probability search paths on the space of equivalence classes with node degree constraints by proving a combinatorial property of DAG comparisons. Simulation studies on the proposed MCMC sampler are conducted to illustrate the main theoretical findings.
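For orientation, the rapid-mixing claim in the abstract can be read against the standard spectral-gap bound for finite Markov chains. The display below is the textbook relation for a reversible, lazy chain (see, e.g., Levin and Peres, Markov Chains and Mixing Times), not the paper's own theorem, which the keywords suggest is stated via a Poincaré-type inequality; $\mathrm{Gap}(P)$ and $\pi_{\min}$ denote the spectral gap of the transition matrix $P$ and the smallest stationary probability.

```latex
% Textbook spectral-gap bound for a reversible, lazy chain on a finite
% state space; the paper derives its own, more refined bound of this
% flavor, so this display is only the standard benchmark.
\[
  t_{\mathrm{mix}}(\epsilon) \;\le\;
  \frac{1}{\mathrm{Gap}(P)}\,
  \log\!\left(\frac{1}{\epsilon\,\pi_{\min}}\right),
  \qquad
  \pi_{\min} \;=\; \min_{x}\pi(x).
\]
```

Rapid mixing, in this sense, means that $\mathrm{Gap}(P)^{-1}$ and $\log(1/\pi_{\min})$ grow at most polynomially in the problem size, so the required number of iterations is polynomial as well.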
Full work available at URL: https://arxiv.org/abs/2101.04084
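For intuition about the kind of sampler the abstract analyzes, here is a minimal, self-contained sketch of a random walk Metropolis-Hastings chain on a finite model space. The state space (subsets of a ground set), the add/delete neighborhood, and the toy `log_score` are all illustrative assumptions; the paper's actual sampler operates on Markov equivalence classes of DAGs under node degree constraints.

```python
# Minimal sketch: random walk Metropolis-Hastings on a finite model space.
# The subset state space, the add/delete neighborhood, and the toy
# log_score below are illustrative assumptions, not the paper's sampler
# over equivalence classes of DAGs.
import math
import random
from collections import Counter

random.seed(0)

p = 8                                  # toy problem size
TRUE_MODEL = frozenset({1, 3, 5})      # hypothetical "true" model

def log_score(model):
    """Toy log posterior: reward overlap with TRUE_MODEL, penalize size."""
    return 2.0 * len(model & TRUE_MODEL) - 1.0 * len(model)

def neighbors(model):
    """All states reachable by adding or deleting one element (always p)."""
    return [model ^ {j} for j in range(p)]

def rw_mh(n_iter=20000):
    """Run the chain and count visits to each state."""
    state = frozenset()
    visits = Counter()
    for _ in range(n_iter):
        proposal = random.choice(neighbors(state))
        # A uniform proposal over a fixed-size neighborhood is symmetric,
        # so the MH acceptance ratio reduces to the posterior ratio.
        log_ratio = log_score(proposal) - log_score(state)
        if random.random() < math.exp(min(0.0, log_ratio)):
            state = proposal
        visits[state] += 1
    return visits

if __name__ == "__main__":
    best, count = rw_mh().most_common(1)[0]
    print(sorted(best), count)  # the mode concentrates near TRUE_MODEL
```

Rapid mixing, in the paper's sense, says that the number of iterations such a chain needs before its samples are useful scales only polynomially with the problem size, despite the exponentially large state space.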
Keywords: finite Markov chains; random walk Metropolis-Hastings; rapid mixing; strong selection consistency; Poincaré-type inequality; greedy equivalence search (GES); locally informed proposals
Cites Work
- Causation, prediction, and search
- Empirical Bayes posterior concentration in sparse high-dimensional linear models
- Parameter priors for directed acyclic graphical models and the characterization of several probability distributions
- Improving the structure MCMC sampler for Bayesian networks by introducing a new edge reversal move
- Bayesian model averaging: a tutorial (with comments and a rejoinder)
- High-dimensional graphs and variable selection with the Lasso
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell_{1}$-Constrained Quadratic Programming (Lasso)
- Sparsistency and rates of convergence in large covariance matrix estimation
- A characterization of Markov equivalence classes for acyclic digraphs
- Being Bayesian about network structure. A Bayesian approach to structure discovery in Bayesian networks
- Improving Markov chain Monte Carlo model search for data mining
- Regularized estimation of large covariance matrices
- Bayesian model averaging and model selection for Markov equivalence classes of acyclic digraphs
- \(\ell_{0}\)-penalized maximum likelihood for sparse directed acyclic graphs
- doi:10.1162/153244302760200696
- doi:10.1162/153244304773936045
- Bayesian Graphical Models for Discrete Data
- Minimax-optimal nonparametric regression in high dimensions
- Global identifiability of linear structural equation models
- Bayesian variable selection with shrinking and diffusing priors
- Posterior contraction in sparse Bayesian factor models for massive covariance matrices
- Sparse Matrix Inversion with Scaled Lasso
- Learning Causal Bayesian Network Structures From Experimental Data
- Propagation of Probabilities, Means, and Variances in Mixed Graphical Association Models
- Bayesian Model Selection in High-Dimensional Settings
- Geometry of the faithfulness assumption in causal inference
- doi:10.1162/153244303321897717
- Improved Bounds for Mixing Rates of Markov Chains and Multicommodity Flow
- Reversible MCMC on Markov equivalence classes of sparse directed acyclic graphs
- On the computational complexity of high-dimensional Bayesian variable selection
- Structure discovery in Bayesian networks by sampling partial orders
- Posterior graph selection and estimation consistency for high-dimensional Bayesian DAG models
- Bayesian structure learning in graphical models
- Learning Markov equivalence classes of directed acyclic graphs: an objective Bayes approach
- Equivalence and Synthesis of Causal Models
- Support consistency of direct sparse-change learning in Markov networks
- High-dimensional consistency in score-based and hybrid structure learning
- A general framework for Bayes structured linear models
- Minimax posterior convergence rates and model selection consistency in high-dimensional DAG models based on sparse Cholesky factors
- Posterior contraction in sparse generalized linear models
- Who learns better Bayesian network structures: accuracy and speed of structure learning algorithms
- Informed Proposals for Local MCMC in Discrete Spaces
- A Gibbs sampler for learning DAGs
- Consistency guarantees for greedy permutation-based causal inference algorithms
- Compatible priors for model selection of high-dimensional Gaussian DAGs
- Dimension-Free Mixing for High-Dimensional Bayesian Variable Selection
Cited In (1)
- Complexity analysis of Bayesian learning of high-dimensional DAG models and their equivalence classes