Use of Explanation Trees to Describe the State Space of a Probabilistic-Based Abduction Problem
Publication: 3562273
DOI: 10.1007/978-3-540-85066-3_10
zbMath: 1187.68605
OpenAlex: W2162836051
MaRDI QID: Q3562273
M. Julia Flores, José A. Gámez, Serafín Moral
Publication date: 21 May 2010
Published in: Innovations in Bayesian Networks
Full work available at URL: https://doi.org/10.1007/978-3-540-85066-3_10
Mathematics Subject Classification:
- Reasoning under uncertainty in the context of artificial intelligence (68T37)
- Problem solving in the context of artificial intelligence (heuristics, search strategies, etc.) (68T20)
Cites Work
- Diagnosing multiple faults
- A theory of diagnosis from first principles
- Finding MAPs for belief networks is NP-hard
- Importance sampling in Bayesian networks using probability trees
- Binary join trees for computing marginals in the Shenoy-Shafer architecture
- The role of relevance in explanation. I: Irrelevance as statistical independence
- Simplifying explanations in Bayesian belief networks
- A Probabilistic Causal Model for Diagnostic Problem Solving Part I: Integrating Symbolic Causal Inference with Numeric Probabilistic Inference
- Symbolic and Quantitative Approaches to Reasoning with Uncertainty