Pages that link to "Item:Q2238680"
From MaRDI portal
The following pages link to Explaining individual predictions when features are dependent: more accurate approximations to Shapley values (Q2238680):
Displaying 15 items.
- PredDiff: explanations and interactions from conditional expectations (Q2093377)
- Wasserstein-based fairness interpretability framework for machine learning models (Q2102385)
- Explanation with the winter value: efficient computation for hierarchical Choquet integrals (Q2105574)
- Relation between prognostics predictor evaluation metrics and local interpretability SHAP values (Q2124455)
- ESG score prediction through random forest algorithm (Q2155224)
- Explaining predictive models using Shapley values and non-parametric vine copulas (Q2236381)
- \( \mathcal{G} \)-LIME: statistical learning for local interpretations of deep neural networks using global priors (Q2680795)
- (Q5053199)
- On the Tractability of SHAP Explanations (Q5094036)
- A \(k\)-additive Choquet integral-based approach to approximate the SHAP values for local interpretability in machine learning (Q6067036)
- Explainable subgradient tree boosting for prescriptive analytics in operations management (Q6087515)
- A comparative study of methods for estimating model-agnostic Shapley value explanations (Q6609084)
- On marginal feature attributions of tree-based models (Q6620135)
- An illustration of model agnostic explainability methods applied to environmental data (Q6626539)
- Explainable machine learning for financial risk management: two practical use cases (Q6633384)