Use of explanation trees to describe the state space of a probabilistic-based abduction problem
DOI: 10.1007/978-3-540-85066-3_10
zbMATH Open: 1187.68605
OpenAlex: W2162836051
MaRDI QID: Q3562273
FDO: Q3562273
Authors: M. Julia Flores, José A. Gámez, Serafín Moral
Publication date: 21 May 2010
Published in: Innovations in Bayesian Networks
Full work available at URL: https://doi.org/10.1007/978-3-540-85066-3_10
Recommendations
- Symbolic and Quantitative Approaches to Reasoning with Uncertainty
- Most Inforbable Explanations: Finding Explanations in Bayesian Networks That Are Both Probable and Informative
- Simplifying explanations in Bayesian belief networks
- Cost-based abduction and MAP explanation
- Accelerating chromosome evaluation for partial abductive inference in Bayesian networks by means of explanation set absorption
Mathematics Subject Classification:
- Problem solving in the context of artificial intelligence (heuristics, search strategies, etc.) (68T20)
- Reasoning under uncertainty in the context of artificial intelligence (68T37)
Cites Work
- Importance sampling in Bayesian networks using probability trees
- Title not available
- Diagnosing multiple faults
- A theory of diagnosis from first principles
- Finding MAPs for belief networks is NP-hard
- Title not available
- Binary join trees for computing marginals in the Shenoy-Shafer architecture
- Simplifying explanations in Bayesian belief networks
- A Probabilistic Causal Model for Diagnostic Problem Solving Part I: Integrating Symbolic Causal Inference with Numeric Probabilistic Inference
- The role of relevance in explanation. I: Irrelevance as statistical independence
- Symbolic and Quantitative Approaches to Reasoning with Uncertainty
- Title not available
Cited In (3)