shap
Software: 42615
swMATH: 30901
MaRDI QID: Q42615
FDO: Q42615
Author name not available
Source code repository: https://github.com/slundberg/shap
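shap is a Python library for computing Shapley-value-based explanations of machine learning model predictions. A minimal usage sketch follows (illustrative only, not part of this catalog entry; it assumes shap, numpy, and scikit-learn are installed, and the toy data and model are assumptions made for the example):

```python
# Illustrative sketch of typical shap usage; the toy data and model
# below are assumptions for this example.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                # 200 samples, 4 features
y = X[:, 0] + 2.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values for tree ensembles
# via the Tree SHAP algorithm.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)       # shape (200, 4)

# Local accuracy: the base value plus the per-feature attributions
# recovers each individual prediction.
base = np.ravel(explainer.expected_value)[0]
print(model.predict(X[:1])[0], base + shap_values[0].sum())
```

TreeExplainer is specialized to tree ensembles; shap.KernelExplainer provides the model-agnostic, sampling-based variant for arbitrary models.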
Cited In (59)
- Explaining individual predictions when features are dependent: more accurate approximations to Shapley values
- Grafting for combinatorial binary model using frequent itemset mining
- Explanation with the Winter value: efficient computation for hierarchical Choquet integrals
- Mathematical optimization in classification and regression trees
- A recommendation system for car insurance
- Stochastic Tree Search for Estimating Optimal Dynamic Treatment Regimes
- A neural network ensemble approach for GDP forecasting
- On importance indices in multicriteria decision making
- Explainable Deep Learning: A Field Guide for the Uninitiated
- A Survey on the Explainability of Supervised Machine Learning
- Relation between prognostics predictor evaluation metrics and local interpretability SHAP values
- Explainable models of credit losses
- Learning Optimal Decision Sets and Lists with SAT
- "That's (not) the output I expected!" On the role of end user expectations in creating explanations of AI systems
- Explaining black-box classifiers using post-hoc explanations-by-example: the effect of explanations and error-rates in XAI user studies
- GLocalX -- from local to global explanations of black box AI models
- Evaluating local explanation methods on ground truth
- Embedding deep networks into visual explanations
- Using ontologies to enhance human understandability of global post-hoc explanations of black-box models
- Show or suppress? Managing input uncertainty in machine learning model explanations
- Spatial relation learning for explainable image classification and annotation in critical applications
- Non-technical losses detection in energy consumption focusing on energy recovery and explainability
- To imprison or not to imprison: an analytics model for drug courts
- Uncertainty quantification for data-driven turbulence modelling with Mondrian forests
- Explaining Hierarchical Multi-linear Models
- A comparison of instance-level counterfactual explanation algorithms for behavioral and textual data: SEDC, LIME-C and SHAP-C
- A survey of safety and trustworthiness of deep neural networks: verification, testing, adversarial attack and defence, and interpretability
- Title not available
- INK: knowledge graph embeddings for node classification
- Cooperative games on simplicial complexes
- A game-based approximate verification of deep neural networks with provable guarantees
- Feature Selection Based on Shapley Additive Explanations on Metagenomic Data for Colorectal Cancer Diagnosis
- Title not available
- Interactive Slice Visualization for Exploring Machine Learning Models
- Consistent regression using data-dependent coverings
- Title not available
- Scrutinizing XAI using linear ground-truth data with suppressor variables
- An exploration of combinatorial testing-based approaches to fault localization for explainable AI
- A framework for step-wise explaining how to solve constraint satisfaction problems
- Clustering patterns connecting COVID-19 dynamics and human mobility using optimal transport
- Title not available
- Title not available
- Propositionalization and embeddings: two sides of the same coin
- A robust algorithm for explaining unreliable machine learning survival models using the Kolmogorov-Smirnov bounds
- SAT-based rigorous explanations for decision lists
- A turbulent eddy-viscosity surrogate modeling framework for Reynolds-averaged Navier-Stokes simulations
- Deep learning for credit scoring: do or don't?
- Unrestricted permutation forces extrapolation: variable importance requires at least one more model, or there is no free variable importance
- Toward an explainable machine learning model for claim frequency: a use case in car insurance pricing with telematics data
- Counterfactual Explanation of Machine Learning Survival Models
- Testing conditional independence in supervised learning algorithms
- Assessment of the influence of features on a classification problem: an application to COVID-19 patients
- On sparse optimal regression trees
- On the Tractability of SHAP Explanations
- Title not available
- Shapley Homology: Topological Analysis of Sample Influence for Neural Networks
- White-box Induction From SVM Models: Explainable AI with Logic Programming
- Socially Responsible AI Algorithms: Issues, Purposes, and Challenges
- Interpreted machine learning in fluid dynamics: explaining relaminarisation events in wall-bounded shear flows