Complexity analysis of Bayesian learning of high-dimensional DAG models and their equivalence classes
MaRDI portal publication Q6136582
Abstract: Structure learning via MCMC sampling is known to be very challenging because of the enormous search space and the existence of Markov equivalent DAGs. Theoretical results on the mixing behavior are lacking. In this work, we prove the rapid mixing of a random walk Metropolis-Hastings algorithm, which reveals that the complexity of Bayesian learning of sparse equivalence classes grows only polynomially in the sample size \(n\) and the number of nodes \(p\), under some high-dimensional assumptions. A series of high-dimensional consistency results is obtained, including the strong selection consistency of an empirical Bayes model for structure learning. Our proof is based on two new results. First, we derive a general mixing time bound on finite state spaces, which can be applied to local MCMC schemes for other model selection problems. Second, we construct high-probability search paths on the space of equivalence classes with node degree constraints by proving a combinatorial property of DAG comparisons. Simulation studies on the proposed MCMC sampler are conducted to illustrate the main theoretical findings.
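As a concrete illustration of the algorithm the abstract analyzes, the following is a minimal sketch of a random-walk Metropolis-Hastings chain on a finite state space that proposes uniformly from a neighborhood of the current state. Rapid mixing means that the mixing time \(t_{\mathrm{mix}}(\epsilon) = \min\{t : \max_x \|P^t(x,\cdot) - \pi\|_{\mathrm{TV}} \le \epsilon\}\) grows only polynomially in \(n\) and \(p\). The `neighbors` and `log_post` callables below are hypothetical placeholders (in the paper the states are sparse equivalence classes of DAGs and the score comes from an empirical Bayes posterior); this is a generic sketch, not the authors' implementation.

```python
import math
import random

def rw_metropolis_hastings(x0, neighbors, log_post, n_iter, seed=0):
    """Generic random-walk Metropolis-Hastings on a finite state space.

    neighbors(x) -> list of states reachable from x in one move;
    log_post(x)  -> unnormalized log posterior score of state x.
    Proposes a uniform draw from neighbors(x); the acceptance ratio
    includes the |N(x)| / |N(y)| correction because neighborhood
    sizes generally differ across states.
    """
    rng = random.Random(seed)
    x, lp_x = x0, log_post(x0)
    chain = [x]
    for _ in range(n_iter):
        nbrs = neighbors(x)
        y = rng.choice(nbrs)
        lp_y = log_post(y)
        # log acceptance probability: posterior ratio times proposal correction
        log_alpha = (lp_y - lp_x) + math.log(len(nbrs)) - math.log(len(neighbors(y)))
        if rng.random() < math.exp(min(0.0, log_alpha)):
            x, lp_x = y, lp_y
        chain.append(x)
    return chain

# Toy usage on a line graph with states 0..9 and posterior mode at 7:
nbrs = lambda x: [s for s in (x - 1, x + 1) if 0 <= s <= 9]
chain = rw_metropolis_hastings(0, nbrs, lambda x: -abs(x - 7), 1000)
```

The paper's mixing-time bound controls how many such iterations are needed before the chain state is approximately a draw from the posterior over equivalence classes.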
Recommendations
- On the computational complexity of high-dimensional Bayesian variable selection
- A Gibbs sampler for learning DAGs
- Large-sample learning of Bayesian networks is NP-hard
- Structure learning in Bayesian networks of a moderate size by efficient sampling
- Posterior graph selection and estimation consistency for high-dimensional Bayesian DAG models
Cites work
- Scientific article, zbMATH DE number 1097578 (title not available)
- Scientific article, zbMATH DE number 2150792 (title not available)
- Scientific article, zbMATH DE number 1894279 (title not available)
- Scientific article, zbMATH DE number 7370527 (title not available)
- doi:10.1162/153244302760200696
- doi:10.1162/153244303321897717
- doi:10.1162/153244304773936045
- A Gibbs sampler for learning DAGs
- A characterization of Markov equivalence classes for acyclic digraphs
- A general framework for Bayes structured linear models
- Bayesian Graphical Models for Discrete Data
- Bayesian model averaging and model selection for Markov equivalence classes of acyclic digraphs
- Bayesian model averaging: a tutorial (with comments and a rejoinder)
- Bayesian model selection in high-dimensional settings
- Bayesian structure learning in graphical models
- Bayesian variable selection with shrinking and diffusing priors
- Being Bayesian about network structure. A Bayesian approach to structure discovery in Bayesian networks
- Causation, prediction, and search
- Compatible priors for model selection of high-dimensional Gaussian DAGs
- Complexity analysis of Bayesian learning of high-dimensional DAG models and their equivalence classes
- Consistency guarantees for greedy permutation-based causal inference algorithms
- Dimension-Free Mixing for High-Dimensional Bayesian Variable Selection
- Empirical Bayes posterior concentration in sparse high-dimensional linear models
- Equivalence and Synthesis of Causal Models
- Estimating high-dimensional directed acyclic graphs with the PC-algorithm
- Geometry of the faithfulness assumption in causal inference
- Global identifiability of linear structural equation models
- High-dimensional consistency in score-based and hybrid structure learning
- High-dimensional graphs and variable selection with the Lasso
- Improved Bounds for Mixing Rates of Markov Chains and Multicommodity Flow
- Improving Markov chain Monte Carlo model search for data mining
- Improving structure MCMC for Bayesian networks through Markov blanket resampling
- Improving the structure MCMC sampler for Bayesian networks by introducing a new edge reversal move
- Informed proposals for local MCMC in discrete spaces
- Learning Causal Bayesian Network Structures From Experimental Data
- Learning Markov equivalence classes of directed acyclic graphs: an objective Bayes approach
- Minimax posterior convergence rates and model selection consistency in high-dimensional DAG models based on sparse Cholesky factors
- Minimax-optimal nonparametric regression in high dimensions
- On the computational complexity of high-dimensional Bayesian variable selection
- Parameter priors for directed acyclic graphical models and the characterization of several probability distributions
- Posterior contraction in sparse Bayesian factor models for massive covariance matrices
- Posterior contraction in sparse generalized linear models
- Posterior graph selection and estimation consistency for high-dimensional Bayesian DAG models
- Probabilistic graphical models
- Propagation of Probabilities, Means, and Variances in Mixed Graphical Association Models
- Regularized estimation of large covariance matrices
- Reversible MCMC on Markov equivalence classes of sparse directed acyclic graphs
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using \(\ell_{1}\)-Constrained Quadratic Programming (Lasso)
- Sparse matrix inversion with scaled Lasso
- Sparsistency and rates of convergence in large covariance matrix estimation
- Structure discovery in Bayesian networks by sampling partial orders
- Support consistency of direct sparse-change learning in Markov networks
- Who learns better Bayesian network structures: accuracy and speed of structure learning algorithms
- \(\ell_{0}\)-penalized maximum likelihood for sparse directed acyclic graphs
Cited in: 1 document