On computing probabilistic abductive explanations
Publication: 6116531
DOI: 10.1016/J.IJAR.2023.108939
arXiv: 2212.05990
MaRDI QID: Q6116531
X. X. Huang, Joao Marques-Silva, Nina Narodytska, M. C. Cooper, Yacine Izza, A. A. Ignatiev
Publication date: 18 July 2023
Published in: International Journal of Approximate Reasoning
Abstract: The most widely studied explainable AI (XAI) approaches are unsound. This is the case with well-known model-agnostic explanation approaches, and it is also the case with approaches based on saliency maps. One solution is to consider intrinsic interpretability, which does not exhibit the drawback of unsoundness. Unfortunately, intrinsic interpretability can display unwieldy explanation redundancy. Formal explainability represents the alternative to these non-rigorous approaches, with one example being PI-explanations. Unfortunately, PI-explanations also exhibit important drawbacks, the most visible of which is arguably their size. Recently, it has been observed that the (absolute) rigor of PI-explanations can be traded off for a smaller explanation size, by computing the so-called relevant sets. Given some positive \(\delta\), a set \(S\) of features is \(\delta\)-relevant if, when the features in \(S\) are fixed, the probability of getting the target class exceeds \(\delta\). However, even for very simple classifiers, the complexity of computing relevant sets of features is prohibitive, with the decision problem being \(\mathrm{NP}^{\mathrm{PP}}\)-complete for circuit-based classifiers. In contrast with earlier negative results, this paper investigates practical approaches for computing relevant sets for a number of widely used classifiers that include Decision Trees (DTs), Naive Bayes Classifiers (NBCs), and several families of classifiers obtained from propositional languages. Moreover, the paper shows that, in practice, and for these families of classifiers, relevant sets are easy to compute. Furthermore, the experiments confirm that succinct sets of relevant features can be obtained for the families of classifiers considered.
Full work available at URL: https://arxiv.org/abs/2212.05990
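The notion of a \(\delta\)-relevant set from the abstract can be illustrated with a small brute-force check. The sketch below is not the paper's algorithm (which exploits the structure of DTs, NBCs, and classifiers from propositional languages); it simply enumerates the free features of a toy Boolean classifier under a uniform input distribution, and all function and variable names are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's method): brute-force check of
# delta-relevance for a toy Boolean classifier under a uniform feature
# distribution. Exponential in the number of free features in general.
from itertools import product

def is_delta_relevant(classifier, instance, fixed, delta):
    """True if fixing the features in `fixed` to their values in `instance`
    makes the probability of the target class exceed `delta`.

    classifier: function mapping a tuple of 0/1 features to a class label
    instance:   tuple of 0/1 feature values being explained
    fixed:      set of feature indices held at their instance values
    delta:      probability threshold in (0, 1)
    """
    target = classifier(instance)
    free = [i for i in range(len(instance)) if i not in fixed]
    hits = total = 0
    # Enumerate every assignment to the free features.
    for values in product((0, 1), repeat=len(free)):
        point = list(instance)
        for i, v in zip(free, values):
            point[i] = v
        total += 1
        hits += classifier(tuple(point)) == target
    return hits / total > delta

# Example: 3-feature majority vote; fixing features {0, 1} to (1, 1)
# guarantees the target class, so the set is delta-relevant for any delta < 1.
majority = lambda x: int(sum(x) >= 2)
print(is_delta_relevant(majority, (1, 1, 0), {0, 1}, 0.9))  # True
```

The hardness result quoted in the abstract concerns exactly this probability computation for circuit-based classifiers; the paper's contribution is that for the listed classifier families the check can be carried out efficiently in practice.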
Cites Work
- Graph-Based Algorithms for Boolean Function Manipulation
- Minimal sets on propositional formulae. Problems and reductions
- Computational Complexity
- Bayesian network classifiers
- An FPTAS for #Knapsack and Related Counting Problems
- The complexity of theorem-proving procedures
- Fast, flexible MUS enumeration
- On Computing Preferred MUSes and MCSes
- A theory of the learnable
- On the Tractable Counting of Theory Models and its Application to Truth Maintenance and Belief Revision
- Explanations, belief revision and defeasible reasoning.
- Approximate counting by dynamic programming
- Decomposable negation normal form
- Consistency restoration and explanations in dynamic CSPs---Application to configuration
- Bayesian Reasoning and Machine Learning
- Decision tree induction based on efficient tree restructuring
- On Tackling Explanation Redundancy in Decision Trees
- Explanation in artificial intelligence: insights from the social sciences
- Optimal classification trees
- Assessing heuristic machine learning explanations with model counting
- The Computational Complexity of Understanding Binary Classifier Decisions
- SAT-based rigorous explanations for decision lists
- A logic for binary classifiers and their explanation
- Faster FPTASes for counting and random generation of knapsack solutions
- A Faster FPTAS for #Knapsack
- Non-monotonic explanation functions
Cited In (9)
- Representation theorems for explanatory reasoning based on cumulative models
- Explanation-based generalisation \(=\) partial evaluation
- On the failings of Shapley values for explainability
- Towards formal XAI: formally approximate minimal explanations of neural networks
- Foundations of Information and Knowledge Systems
- Symbolic and Quantitative Approaches to Reasoning with Uncertainty
- Relaxed Abduction
- Explanations as programs in probabilistic logic programming
- Abductive explanation-based learning: A solution to the multiple inconsistent explanation problem