Marginal pseudo-likelihood learning of discrete Markov network structures
From MaRDI portal
Abstract: Undirected graphical models known as Markov networks are popular in a wide variety of applications, ranging from statistical physics to computational biology. Traditionally, learning of the network structure has been done under the assumption of chordality, which ensures that efficient scoring methods can be used. In general, non-chordal graphs have intractable normalizing constants, which renders the calculation of Bayesian and other scores difficult beyond very small-scale systems. Recently, there has been a surge of interest in regularized pseudo-likelihood methods for structural learning of large-scale Markov network models, as such an approach avoids the assumption of chordality. The currently available methods typically require a tuning parameter to adapt the level of regularization to a particular dataset, which can be optimized, for example, by cross-validation. Here we introduce a Bayesian version of pseudo-likelihood scoring of Markov networks, which enables automatic regularization through marginalization over the nuisance parameters in the model. We prove consistency of the resulting marginal pseudo-likelihood (MPL) estimator for the network structure via comparison with the pseudo information criterion. Identification of the MPL-optimal network on a prescanned graph space is considered with both greedy hill-climbing and exact pseudo-Boolean optimization algorithms. We find that for reasonable sample sizes the hill-climbing approach most often identifies networks at a negligible distance from the restricted global optimum. Using synthetic and existing benchmark networks, the marginal pseudo-likelihood method is shown to generally perform favorably against recent popular inference methods for Markov networks.
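To illustrate the idea of marginalizing out the nuisance parameters, the following is a minimal sketch (not the paper's implementation) of an MPL-style score for a discrete Markov network: each node's conditional distribution given its Markov blanket is scored with a Dirichlet-multinomial marginal likelihood, and the graph score is the sum over nodes. The function names, the symmetric pseudocount `alpha`, and the edge-set graph representation are illustrative assumptions.

```python
# Hedged sketch of a marginal pseudo-likelihood (MPL) score for a
# discrete Markov network. Per node j, data rows are grouped by the
# configuration of j's Markov blanket, and each group's counts are
# scored with a Dirichlet-multinomial marginal likelihood under an
# (assumed) symmetric Dirichlet prior with total pseudocount `alpha`.
from collections import Counter
from math import lgamma

def node_mpl(data, j, blanket, alpha=1.0):
    """Log marginal likelihood of node j's conditionals given its
    Markov blanket (a list of variable indices)."""
    outcomes = sorted({row[j] for row in data})
    r = len(outcomes)                       # number of outcomes of node j
    # counts[(blanket config, outcome)] = observation count
    counts = Counter((tuple(row[b] for b in blanket), row[j]) for row in data)
    configs = {cfg for cfg, _ in counts}
    score = 0.0
    for cfg in configs:
        n_l = sum(counts[(cfg, k)] for k in outcomes)  # rows in this group
        score += lgamma(alpha) - lgamma(alpha + n_l)
        for k in outcomes:
            score += lgamma(alpha / r + counts[(cfg, k)]) - lgamma(alpha / r)
    return score

def mpl_score(data, graph, n_vars, alpha=1.0):
    """MPL score of an undirected graph given as a set of edges (i, j);
    the score decomposes as a sum of per-node terms."""
    blankets = {j: sorted({a for e in graph for a in e if j in e} - {j})
                for j in range(n_vars)}
    return sum(node_mpl(data, j, blankets[j], alpha) for j in range(n_vars))
```

Because the per-node terms are closed-form, a greedy hill-climbing search as mentioned in the abstract only needs to rescore the nodes whose Markov blankets change when an edge is added or removed; for instance, on data where variables 0 and 1 are strongly dependent and 2 is independent, `mpl_score(data, {(0, 1)}, 3)` exceeds `mpl_score(data, {(0, 2)}, 3)`.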
Recommendations
- Structure learning of contextual Markov networks using marginal pseudo-likelihood
- High-dimensional structure learning of binary pairwise Markov networks: a comparative numerical study
- Learning Gaussian graphical models with fractional marginal pseudo-likelihood
- Estimation of sparse binary pairwise Markov networks using pseudo-likelihoods
- Learning Markov networks: Maximum bounded tree-width graphs
Cited in (16)
- High-dimensional structure learning of sparse vector autoregressive models using fractional marginal pseudo-likelihood
- Guest editors' introduction to the special issue ``Network psychometrics in action: methodological innovations inspired by empirical problems''
- Objective Bayesian edge screening and structure selection for Ising networks
- Estimation of sparse binary pairwise Markov networks using pseudo-likelihoods
- Blankets joint posterior score for learning Markov network structures
- Loglinear model selection and human mobility
- Improving Markov network structure learning using decision trees
- scientific article; zbMATH DE number 7306910
- Direct Learning of Sparse Changes in Markov Networks by Density Ratio Estimation
- Iterative maximum likelihood on networks
- Structure learning of contextual Markov networks using marginal pseudo-likelihood
- Learning Markov networks: Maximum bounded tree-width graphs
- Learning Gaussian graphical models with fractional marginal pseudo-likelihood
- High-dimensional structure learning of binary pairwise Markov networks: a comparative numerical study
- Covariate-Assisted Bayesian Graph Learning for Heterogeneous Data
- Support consistency of direct sparse-change learning in Markov networks
This page was built for publication: Marginal pseudo-likelihood learning of discrete Markov network structures
MaRDI item Q1699714