Explaining individual predictions when features are dependent: more accurate approximations to Shapley values
DOI: 10.1016/j.artint.2021.103502
OpenAlex: W3146613606
Wikidata: Q114206176
Scholia: Q114206176
MaRDI QID: Q2238680
Authors: Kjersti Aas, Martin Jullum, Anders Løland
Publication date: 2 November 2021
Published in: Artificial Intelligence
Full work available at URL: https://arxiv.org/abs/1903.10464
Related Items (13)
- Relation between prognostics predictor evaluation metrics and local interpretability SHAP values
- ESG score prediction through random forest algorithm
- On the Tractability of SHAP Explanations
- A \(k\)-additive Choquet integral-based approach to approximate the SHAP values for local interpretability in machine learning
- A General Framework for Inference on Algorithm-Agnostic Variable Importance
- Explainable subgradient tree boosting for prescriptive analytics in operations management
- \( \mathcal{G} \)-LIME: statistical learning for local interpretations of deep neural networks using global priors
- Explanation of pseudo-Boolean functions using cooperative game theory and prime implicants
- Explaining predictive models using Shapley values and non-parametric vine copulas
- Unnamed Item
- PredDiff: explanations and interactions from conditional expectations
- Wasserstein-based fairness interpretability framework for machine learning models
- Explanation with the Winter value: efficient computation for hierarchical Choquet integrals
Uses Software
Cites Work
- All Models are Wrong, but Many are Useful: Learning a Variable's Importance by Studying an Entire Class of Prediction Models Simultaneously
- Adaptive pointwise estimation of conditional density function
- Monotonic solutions of cooperative games
- Mixtures of generalized hyperbolic distributions and mixtures of skew-\(t\) distributions for model-based clustering with incomplete data
- Converting high-dimensional regression to high-dimensional conditional density estimation
- Fast kernel conditional density estimation: a dual-tree Monte Carlo approach
- A generalized Mahalanobis distance for mixed data
- Sobol' Indices and Shapley Value
- Shapley Effects for Global Sensitivity Analysis: Theory and Computation
- Remarks on Some Nonparametric Estimates of a Density Function
- Smoothing Parameter Selection in Nonparametric Regression Using an Improved Akaike Information Criterion
- Topics in Advanced Econometrics
- On Shapley Value for Measuring Importance of Dependent Inputs
- A mixture of generalized hyperbolic distributions
- Strictly Proper Scoring Rules, Prediction, and Estimation
- A New Measure of Rank Correlation
- The Population Frequencies of Species and the Estimation of Population Parameters