An efficient explanation of individual classifications using game theory
Publication:2896017
Recommendations
- How to explain individual classification decisions
- Explaining individual predictions when features are dependent: more accurate approximations to Shapley values
- Interpretable concept-based classification with Shapley values
- scientific article; zbMATH DE number 1934575
- A Symbolic Approach for Counterfactual Explanations
Cited in (33)
- PredDiff: explanations and interactions from conditional expectations
- Comments on "Data science, big data and statistics"
- Necessary players and values
- A robust algorithm for explaining unreliable machine learning survival models using the Kolmogorov-Smirnov bounds
- Explaining predictive models using Shapley values and non-parametric vine copulas
- An illustration of model agnostic explainability methods applied to environmental data
- A comparison of instance-level counterfactual explanation algorithms for behavioral and textual data: SEDC, LIME-C and SHAP-C
- Wasserstein-based fairness interpretability framework for machine learning models
- Explanation with the Winter value: efficient computation for hierarchical Choquet integrals
- A comparative study of methods for estimating model-agnostic Shapley value explanations
- Non-linear dimension reduction in factor-augmented vector autoregressions
- Explanation of pseudo-Boolean functions using cooperative game theory and prime implicants
- A \(k\)-additive Choquet integral-based approach to approximate the SHAP values for local interpretability in machine learning
- Foundations of fine-grained explainability
- Interpretable concept-based classification with Shapley values
- Explaining machine learning models using entropic variable projection
- Explaining individual predictions when features are dependent: more accurate approximations to Shapley values
- Assessment of the influence of features on a classification problem: an application to COVID-19 patients
- scientific article; zbMATH DE number 1934575
- On the failings of Shapley values for explainability
- How to explain individual classification decisions
- Local interpretation of supervised learning models based on high dimensional model representation
- Counterfactual explanation of machine learning survival models
- A theory of dichotomous valuation with applications to variable selection
- Improving regression predictions using individual point reliability estimates based on critical error scenarios
- Explanation with the Winter value: efficient computation for hierarchical Choquet integrals
- scientific article; zbMATH DE number 7307487
- Variable importance evaluation with personalized odds ratio for machine learning model interpretability with applications to electronic health records-based mortality prediction
- The computational complexity of understanding binary classifier decisions
- Variance reduced Shapley value estimation for trustworthy data valuation
- scientific article; zbMATH DE number 7370621
- scientific article; zbMATH DE number 7626724
- scientific article; zbMATH DE number 7625196