Explaining individual predictions when features are dependent: more accurate approximations to Shapley values
From MaRDI portal
Publication: 2238680
DOI: 10.1016/J.ARTINT.2021.103502
OpenAlex: W3146613606
Wikidata: Q114206176 (Scholia: Q114206176)
MaRDI QID: Q2238680
FDO: Q2238680
Kjersti Aas, Martin Jullum, Anders Løland
Publication date: 2 November 2021
Published in: Artificial Intelligence
Full work available at URL: https://arxiv.org/abs/1903.10464
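As context for the works listed below: this is not the paper's method, but a minimal brute-force sketch of the standard Shapley value definition that the approximation approaches on this page build on. It assumes the common marginal (independence-based) value function, where features outside a coalition are averaged over background data — exactly the assumption the paper argues breaks down when features are dependent. All names (`shapley_values`, the toy model `f`) are illustrative, not from the paper.

```python
# Brute-force Shapley feature attributions for a toy model.
# Illustrative only: uses the marginal (feature-independence)
# value function, not the paper's conditional approximations.
from itertools import combinations
from math import factorial

def shapley_values(f, x, background):
    """Exact Shapley attributions for the prediction f(x).

    v(S) is estimated as the mean of f with features in S fixed
    to x and the remaining features drawn from rows of
    `background` (the marginal value function)."""
    n = len(x)

    def v(S):
        total = 0.0
        for row in background:
            z = [x[i] if i in S else row[i] for i in range(n)]
            total += f(z)
        return total / len(background)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (v(set(S) | {i}) - v(set(S)))
    return phi

# Toy linear model: attributions are recoverable in closed form.
f = lambda z: 2 * z[0] + 3 * z[1]
x = [1.0, 1.0]
background = [[0.0, 0.0]]
print(shapley_values(f, x, background))  # -> [2.0, 3.0]
```

For a linear model with an independent (here, single-row) background, the brute-force attributions reduce to each coefficient times the feature's deviation from the background, which is why the output matches the coefficients.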
Recommendations
- Explaining predictive models using Shapley values and non-parametric vine copulas
- On Shapley Value for Measuring Importance of Dependent Inputs
- An efficient explanation of individual classifications using game theory
- Local interpretation of supervised learning models based on high dimensional model representation
- Interpretable concept-based classification with Shapley values
Cites Work
- Title not available
- All Models are Wrong, but Many are Useful: Learning a Variable's Importance by Studying an Entire Class of Prediction Models Simultaneously
- Strictly Proper Scoring Rules, Prediction, and Estimation
- Mixtures of generalized hyperbolic distributions and mixtures of skew-\(t\) distributions for model-based clustering with incomplete data
- Title not available
- A new measure of rank correlation
- Remarks on Some Nonparametric Estimates of a Density Function
- Title not available
- Title not available
- Causality. Models, reasoning, and inference
- A mixture of generalized hyperbolic distributions
- The population frequencies of species and the estimation of population parameters
- Smoothing Parameter Selection in Nonparametric Regression Using an Improved Akaike Information Criterion
- Monotonic solutions of cooperative games
- Topics in Advanced Econometrics
- Adaptive pointwise estimation of conditional density function
- Fast kernel conditional density estimation: a dual-tree Monte Carlo approach
- Title not available
- Converting high-dimensional regression to high-dimensional conditional density estimation
- Sobol' Indices and Shapley Value
- Shapley Effects for Global Sensitivity Analysis: Theory and Computation
- On Shapley Value for Measuring Importance of Dependent Inputs
- A generalized Mahalanobis distance for mixed data
- An efficient explanation of individual classifications using game theory
- How to explain individual classification decisions
Cited In (20)
- Explanation with the Winter value: efficient computation for hierarchical Choquet integrals
- On the Tractability of SHAP Explanations
- Explaining predictive models using Shapley values and non-parametric vine copulas
- Relation between prognostics predictor evaluation metrics and local interpretability SHAP values
- Interpreting machine-learning models in transformed feature space with an application to remote-sensing classification
- Local interpretation of supervised learning models based on high dimensional model representation
- A comparative study of methods for estimating model-agnostic Shapley value explanations
- A \(k\)-additive Choquet integral-based approach to approximate the SHAP values for local interpretability in machine learning
- A General Framework for Inference on Algorithm-Agnostic Variable Importance
- ESG score prediction through random forest algorithm
- On marginal feature attributions of tree-based models
- An illustration of model agnostic explainability methods applied to environmental data
- Title not available
- Explainable machine learning for financial risk management: two practical use cases
- Explainable subgradient tree boosting for prescriptive analytics in operations management
- \( \mathcal{G} \)-LIME: statistical learning for local interpretations of deep neural networks using global priors
- Explanation of pseudo-Boolean functions using cooperative game theory and prime implicants
- Considerations when learning additive explanations for black-box models
- PredDiff: explanations and interactions from conditional expectations
- Wasserstein-based fairness interpretability framework for machine learning models