Use of explanation trees to describe the state space of a probabilistic-based abduction problem
From MaRDI portal
Publication: 3562273
Recommendations
- Symbolic and Quantitative Approaches to Reasoning with Uncertainty
- Most Inforbable Explanations: Finding Explanations in Bayesian Networks That Are Both Probable and Informative
- Simplifying explanations in Bayesian belief networks
- Cost-based abduction and MAP explanation
- Accelerating chromosome evaluation for partial abductive inference in Bayesian networks by means of explanation set absorption
Cites work
- Scientific article (zbMATH DE number 1634667; no title available)
- Scientific article (zbMATH DE number 1735976; no title available)
- Scientific article (zbMATH DE number 2243356; no title available)
- A Probabilistic Causal Model for Diagnostic Problem Solving Part I: Integrating Symbolic Causal Inference with Numeric Probabilistic Inference
- A theory of diagnosis from first principles
- Binary join trees for computing marginals in the Shenoy-Shafer architecture
- Diagnosing multiple faults
- Finding MAPs for belief networks is NP-hard
- Importance sampling in Bayesian networks using probability trees
- Simplifying explanations in Bayesian belief networks
- Symbolic and Quantitative Approaches to Reasoning with Uncertainty
- The role of relevance in explanation. I: Irrelevance as statistical independence
Cited in (3)