Discovering a junction tree behind a Markov network by a greedy algorithm
DOI: 10.1007/S11081-013-9232-8
zbMATH Open: 1294.65013
arXiv: 1104.2762
OpenAlex: W1530670205
MaRDI QID: Q402234
FDO: Q402234
Authors: Tamás Szántai, Edith Kovács
Publication date: 27 August 2014
Published in: Optimization and Engineering
Full work available at URL: https://arxiv.org/abs/1104.2762
Recommendations
- Hypergraphs as a mean of discovering the dependence structure of a discrete multivariate probability distribution
- Maximum likelihood bounded tree-width Markov networks
- Approximating discrete probability distributions with dependence trees
- A sufficiently fast algorithm for finding close to optimal clique trees
- A New Look at the Generalized Distributive Law
Keywords: contingency table; graphical models; conditional independence; greedy algorithm; cherry tree probability distribution; Markov network; triangulated graph
MSC classification: Numerical optimization and variational techniques (65K10); Programming involving graphs or networks (90C35)
Cites Work
- Title not available
- Title not available
- Title not available
- Approximating discrete probability distributions with dependence trees
- Title not available
- Title not available
- Hypergraphs as a mean of discovering the dependence structure of a discrete multivariate probability distribution
- Learning Markov networks: Maximum bounded tree-width graphs
- A backward selection procedure for approximating a discrete probability distribution by decomposable models
- Simple Linear-Time Algorithms to Test Chordality of Graphs, Test Acyclicity of Hypergraphs, and Selectively Reduce Acyclic Hypergraphs
- Computing the Minimum Fill-In is NP-Complete
- Probability bounds with cherry trees.
- A probabilistic classification method based on conditional independences
- Pattern recognition using \(t\)-cherry junction tree structures
- Model Search among Multiplicative Models
- Analogies between Multiplicative Models in Contingency Tables and Covariance Selection
- Probability bounds given by hypercherry trees
- A microscopic study of minimum entropy search in learning decomposable Markov networks
Cited In (5)
- Efficient approximation of probability distributions with \(k\)-order decomposable models
- Pattern recognition using \(t\)-cherry junction tree structures
- A backward selection procedure for approximating a discrete probability distribution by decomposable models
- Hypergraphs as a mean of discovering the dependence structure of a discrete multivariate probability distribution
- A New Look at the Generalized Distributive Law