An efficient explanation of individual classifications using game theory
Publication: Q2896017
zbMATH Open: 1242.68250
MaRDI QID: Q2896017
Authors: Erik Štrumbelj, Igor Kononenko
Publication date: 13 July 2012
Published in: Journal of Machine Learning Research (JMLR)
Full work available at URL: http://www.jmlr.org/papers/v11/strumbelj10a.html
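The paper proposes an efficient, sampling-based approximation of Shapley values for explaining a model's prediction on a single instance. For orientation, below is a minimal Python sketch of such a Monte Carlo permutation scheme; it is not the authors' implementation, and the function name `shapley_contribution`, the list-based feature representation, and the use of a background dataset to supply "absent" feature values are assumptions made for this example.

```python
import random

def shapley_contribution(f, x, background, i, m=1000, rng=None):
    """Monte Carlo estimate of feature i's Shapley contribution to f(x).

    Illustrative sketch only (hypothetical helper, not the paper's code).
    f          -- prediction function: feature vector (list) -> number
    x          -- the instance to explain (list of feature values)
    background -- dataset to draw "absent" feature values from
    i          -- index of the feature being explained
    m          -- number of sampled permutations
    """
    rng = rng or random.Random(0)
    n = len(x)
    total = 0.0
    for _ in range(m):
        perm = list(range(n))
        rng.shuffle(perm)              # random ordering of the features
        z = rng.choice(background)     # random reference instance
        pos = perm.index(i)
        with_i = list(z)
        without_i = list(z)
        for j in perm[:pos + 1]:       # features preceding i, plus i, taken from x
            with_i[j] = x[j]
        for j in perm[:pos]:           # the same feature subset, but without i
            without_i[j] = x[j]
        total += f(with_i) - f(without_i)
    return total / m
```

Summing the estimate over all feature indices yields an additive decomposition of the prediction relative to the background data, which is the kind of per-instance explanation the paper studies.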
Recommendations
- How to explain individual classification decisions
- Explaining individual predictions when features are dependent: more accurate approximations to Shapley values
- Interpretable concept-based classification with Shapley values
- scientific article; zbMATH DE number 1934575
- A Symbolic Approach for Counterfactual Explanations
Mathematics Subject Classification:
- Classification and discrimination; cluster analysis (statistical aspects) (62H30)
- Learning and adaptive systems in artificial intelligence (68T05)
- Cooperative games (91A12)
Cited In (33)
- Non-linear dimension reduction in factor-augmented vector autoregressions
- Explaining individual predictions when features are dependent: more accurate approximations to Shapley values
- Counterfactual explanation of machine learning survival models
- Explanation with the Winter value: efficient computation for hierarchical Choquet integrals
- The computational complexity of understanding binary classifier decisions
- Explanation with the Winter value: efficient computation for hierarchical Choquet integrals
- Foundations of fine-grained explainability
- On the failings of Shapley values for explainability
- A theory of dichotomous valuation with applications to variable selection
- Explaining predictive models using Shapley values and non-parametric vine copulas
- Variance reduced Shapley value estimation for trustworthy data valuation
- A comparison of instance-level counterfactual explanation algorithms for behavioral and textual data: SEDC, LIME-C and SHAP-C
- Comments on "Data science, big data and statistics"
- Title not available
- Title not available
- Local interpretation of supervised learning models based on high dimensional model representation
- Improving regression predictions using individual point reliability estimates based on critical error scenarios
- A comparative study of methods for estimating model-agnostic Shapley value explanations
- A \(k\)-additive Choquet integral-based approach to approximate the SHAP values for local interpretability in machine learning
- Interpretable concept-based classification with Shapley values
- An illustration of model agnostic explainability methods applied to environmental data
- Title not available
- Title not available
- Title not available
- Variable importance evaluation with personalized odds ratio for machine learning model interpretability with applications to electronic health records-based mortality prediction
- A robust algorithm for explaining unreliable machine learning survival models using the Kolmogorov-Smirnov bounds
- How to explain individual classification decisions
- Explanation of pseudo-Boolean functions using cooperative game theory and prime implicants
- Assessment of the influence of features on a classification problem: an application to COVID-19 patients
- Explaining machine learning models using entropic variable projection
- PredDiff: explanations and interactions from conditional expectations
- Necessary players and values
- Wasserstein-based fairness interpretability framework for machine learning models