Discovering a junction tree behind a Markov network by a greedy algorithm
Publication:402234
Abstract: In an earlier paper we introduced a special kind of \(k\)-width junction tree, called a \(k\)-th order \(t\)-cherry junction tree, in order to approximate a joint probability distribution. The approximation is best when the Kullback-Leibler divergence between the true joint probability distribution and the approximating one is minimal. Finding the best approximating \(k\)-width junction tree is NP-complete for \(k>2\). In our earlier paper we also proved that the best approximating \(k\)-width junction tree can be embedded into a \(k\)-th order \(t\)-cherry junction tree. We introduce a greedy algorithm that yields very good approximations in reasonable computing time. In this paper we prove that if the underlying Markov network fulfills certain requirements, then our greedy algorithm finds the true probability distribution or its best approximation within the family of \(k\)-th order \(t\)-cherry tree probability distributions. Our algorithm uses only the \(k\)-th order marginal probability distributions as input. We compare the results of the greedy algorithm proposed in this paper with those of the greedy algorithm proposed by Malvestuto in 1991.
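The abstract does not spell out the greedy rule itself, but the general idea it rests on can be illustrated: for a junction tree, minimizing the Kullback-Leibler divergence of the approximation amounts to maximizing the sum of total correlations over the clusters minus the sum over the separators, and a \(k\)-th order \(t\)-cherry junction tree grows by attaching each new variable to a \((k-1)\)-subset of an existing cluster. The Python sketch below is a minimal illustration of that scheme under these assumptions, using only \(k\)-th order empirical marginals; the function names (total_correlation, greedy_t_cherry_tree) are hypothetical, and this is not necessarily the authors' published procedure.

```python
# Minimal sketch of a greedy k-th order t-cherry junction tree construction.
# Assumption: the quality of the junction-tree approximation is measured by
#   sum_{clusters C} I(X_C) - sum_{separators S} I(X_S),
# where I(.) is the total correlation estimated from k-th order marginals.
from itertools import combinations

import numpy as np


def total_correlation(data, variables):
    """Total correlation I(X_S) = sum_i H(X_i) - H(X_S), estimated from the
    empirical marginal of the columns `variables` of `data`
    (rows = samples, columns = discrete variables)."""
    def entropy(cols):
        _, counts = np.unique(data[:, list(cols)], axis=0, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log(p)).sum())

    return sum(entropy([v]) for v in variables) - entropy(variables)


def greedy_t_cherry_tree(data, k):
    """Greedily build cluster/separator lists of a k-th order t-cherry
    junction tree over all columns of `data`, maximizing at each step the
    gain I(separator + new variable) - I(separator)."""
    n = data.shape[1]

    # Start from the k-subset with the largest total correlation.
    first = max(combinations(range(n), k),
                key=lambda c: total_correlation(data, c))
    clusters, separators = [tuple(first)], []
    covered = set(first)

    # Attach one uncovered variable per step via the best (k-1)-separator
    # taken from an already placed cluster.
    while len(covered) < n:
        best = None
        for v in set(range(n)) - covered:
            for cluster in clusters:
                for sep in combinations(cluster, k - 1):
                    gain = (total_correlation(data, sep + (v,))
                            - total_correlation(data, sep))
                    if best is None or gain > best[0]:
                        best = (gain, v, sep)
        _, v, sep = best
        clusters.append(sep + (v,))
        separators.append(sep)
        covered.add(v)

    return clusters, separators


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sample = rng.integers(0, 2, size=(500, 6))   # toy binary data
    print(greedy_t_cherry_tree(sample, k=3))
```

For \(k=2\) each separator is a single variable, the gain reduces to pairwise mutual information, and the sketch degenerates to a Chow-Liu-style dependence tree construction.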
Recommendations
- Hypergraphs as a mean of discovering the dependence structure of a discrete multivariate probability distribution
- Maximum likelihood bounded tree-width Markov networks
- Approximating discrete probability distributions with dependence trees
- A sufficiently fast algorithm for finding close to optimal clique trees
- A New Look at the Generalized Distributive Law
Cites work
- scientific article; zbMATH DE number 3758364
- scientific article; zbMATH DE number 107482
- scientific article; zbMATH DE number 3519741
- scientific article; zbMATH DE number 4121482
- scientific article; zbMATH DE number 3241743
- A microscopic study of minimum entropy search in learning decomposable Markov networks
- A backward selection procedure for approximating a discrete probability distribution by decomposable models
- A probabilistic classification method based on conditional independences
- Analogies between Multiplicative Models in Contingency Tables and Covariance Selection
- Approximating discrete probability distributions with dependence trees
- Computing the Minimum Fill-In is NP-Complete
- Hypergraphs as a mean of discovering the dependence structure of a discrete multivariate probability distribution
- Learning Markov networks: Maximum bounded tree-width graphs
- Model Search among Multiplicative Models
- Pattern recognition using \(t\)-cherry junction tree structures
- Probability bounds given by hypercherry trees
- Probability bounds with cherry trees
- Simple Linear-Time Algorithms to Test Chordality of Graphs, Test Acyclicity of Hypergraphs, and Selectively Reduce Acyclic Hypergraphs
Cited in (5)
- Hypergraphs as a mean of discovering the dependence structure of a discrete multivariate probability distribution
- Efficient approximation of probability distributions with \(k\)-order decomposable models
- A backward selection procedure for approximating a discrete probability distribution by decomposable models
- A New Look at the Generalized Distributive Law
- Pattern recognition using \(t\)-cherry junction tree structures