shap
From MaRDI portal
Shap
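The shap package implements SHAP (SHapley Additive exPlanations), which attributes a model's prediction to its input features using Shapley values from cooperative game theory. As a minimal, self-contained sketch of the underlying idea (not the shap library's optimized algorithms), exact Shapley values for a toy game can be computed by enumerating all feature coalitions; the value function `v` below is a hypothetical additive payoff chosen so the result is easy to check:

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value):
    """Exact Shapley values by enumerating all coalitions.

    features: list of player/feature names
    value: function mapping a frozenset of features to a real payoff
    """
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                # Standard Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Marginal contribution of f to coalition S
                total += weight * (value(s | {f}) - value(s))
        phi[f] = total
    return phi

# Hypothetical additive game: each feature contributes a fixed amount,
# so the Shapley values recover exactly those contributions.
contrib = {"x1": 2.0, "x2": 1.0, "x3": -0.5}
v = lambda s: sum(contrib[f] for f in s)
print(shapley_values(list(contrib), v))
```

The enumeration costs O(2^n) evaluations of the value function, which is why practical tools such as shap rely on model-specific approximations (e.g., for tree ensembles) rather than brute force.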
Cited in (first 100 items shown):
- Explaining individual predictions when features are dependent: more accurate approximations to Shapley values
- Explaining hierarchical multi-linear models
- Counterfactual explanation of machine learning survival models
- Mathematical optimization in classification and regression trees
- Grafting for combinatorial binary model using frequent itemset mining
- Explanation with the Winter value: efficient computation for hierarchical Choquet integrals
- On the Tractability of SHAP Explanations
- scientific article; zbMATH DE number 7407794
- A recommendation system for car insurance
- A neural network ensemble approach for GDP forecasting
- On importance indices in multicriteria decision making
- Relation between prognostics predictor evaluation metrics and local interpretability SHAP values
- Explainable models of credit losses
- Learning Optimal Decision Sets and Lists with SAT
- Non-technical losses detection in energy consumption focusing on energy recovery and explainability
- "That's (not) the output I expected!" On the role of end user expectations in creating explanations of AI systems
- Explaining black-box classifiers using post-hoc explanations-by-example: the effect of explanations and error-rates in XAI user studies
- GLocalX -- from local to global explanations of black box AI models
- Evaluating local explanation methods on ground truth
- Embedding deep networks into visual explanations
- Using ontologies to enhance human understandability of global post-hoc explanations of black-box models
- Show or suppress? Managing input uncertainty in machine learning model explanations
- Spatial relation learning for explainable image classification and annotation in critical applications
- To imprison or not to imprison: an analytics model for drug courts
- Uncertainty quantification for data-driven turbulence modelling with Mondrian forests
- A comparison of instance-level counterfactual explanation algorithms for behavioral and textual data: SEDC, LIME-C and SHAP-C
- A survey of safety and trustworthiness of deep neural networks: verification, testing, adversarial attack and defence, and interpretability
- INK: knowledge graph embeddings for node classification
- Cooperative games on simplicial complexes
- scientific article; zbMATH DE number 7566076
- A game-based approximate verification of deep neural networks with provable guarantees
- Stochastic tree search for estimating optimal dynamic treatment regimes
- DEA
- SOLAR
- CP-nets
- BeepBeep
- G*Power 3
- FORS
- LOLIMOT
- kappalab
- JBool
- HeuristicLab
- ranger
- Boruta
- Scikit
- relaimpo
- CASdatasets
- GENCOL
- maSigPro
- AMUSE
- Orange
- Aleph
- MUP
- GDAdata
- ProGolem
- TCGAretriever
- xgboost
- qtlbim
- ROSE
- HYCOM
- AdaBoost-SAMME
- LOF
- RDFLib
- CARTscans
- condvis
- ggRandomForests
- flacco
- Temporal_Eigenvector_Centrality
- XGBoost
- OpenSeesPy
- VIKAMINE
- ZOOpt
- LDAvis
- H2O
- PySAT
- DALEX
- Theoryguru
- SALib
- conformalInference
- DREAM.3D
- RAVE
- DEX
- HINT
- breakDown
- live
- LightGBM
- parsnip
- ORL
- SegMine
- PMLB
- QUICKXPLAIN
- iml
- Eureqa
- FEND
- CAMERA
- Caltech-UCSD Birds
- iBreakDown
- InterpretML
- lime
- modelStudio
This page was built for software: shap