The following pages link to shap (Q42615):
Displayed 50 items.
- Testing conditional independence in supervised learning algorithms (Q113672) (← links)
- Mathematical optimization in classification and regression trees (Q828748) (← links)
- On importance indices in multicriteria decision making (Q1735191) (← links)
- A comparison of instance-level counterfactual explanation algorithms for behavioral and textual data: SEDC, LIME-C and SHAP-C (Q2022488) (← links)
- A survey of safety and trustworthiness of deep neural networks: verification, testing, adversarial attack and defence, and interpretability (Q2026298) (← links)
- Consistent regression using data-dependent coverings (Q2044358) (← links)
- Clustering patterns connecting COVID-19 dynamics and human mobility using optimal transport (Q2047386) (← links)
- A robust algorithm for explaining unreliable machine learning survival models using the Kolmogorov-Smirnov bounds (Q2057739) (← links)
- Unrestricted permutation forces extrapolation: variable importance requires at least one more model, or there is no free variable importance (Q2066736) (← links)
- Toward an explainable machine learning model for claim frequency: a use case in car insurance pricing with telematics data (Q2066785) (← links)
- Assessment of the influence of features on a classification problem: an application to COVID-19 patients (Q2077933) (← links)
- A neural network ensemble approach for GDP forecasting (Q2115947) (← links)
- SAT-based rigorous explanations for decision lists (Q2118305) (← links)
- Relation between prognostics predictor evaluation metrics and local interpretability SHAP values (Q2124455) (← links)
- Uncertainty quantification for data-driven turbulence modelling with Mondrian forests (Q2124898) (← links)
- Non-technical losses detection in energy consumption focusing on energy recovery and explainability (Q2127244) (← links)
- INK: knowledge graph embeddings for node classification (Q2134043) (← links)
- Explainable models of credit losses (Q2140185) (← links)
- Explanation with the Winter value: efficient computation for hierarchical Choquet integrals (Q2146048) (← links)
- Scrutinizing XAI using linear ground-truth data with suppressor variables (Q2163233) (← links)
- An exploration of combinatorial testing-based approaches to fault localization for explainable AI (Q2163858) (← links)
- Propositionalization and embeddings: two sides of the same coin (Q2203327) (← links)
- Cooperative games on simplicial complexes (Q2208366) (← links)
- Grafting for combinatorial binary model using frequent itemset mining (Q2218401) (← links)
- A recommendation system for car insurance (Q2219620) (← links)
- Using ontologies to enhance human understandability of global post-hoc explanations of black-box models (Q2238570) (← links)
- Show or suppress? Managing input uncertainty in machine learning model explanations (Q2238626) (← links)
- GLocalX -- from local to global explanations of black box AI models (Q2238629) (← links)
- Explaining black-box classifiers using post-hoc explanations-by-example: the effect of explanations and error-rates in XAI user studies (Q2238633) (← links)
- Evaluating local explanation methods on ground truth (Q2238660) (← links)
- Spatial relation learning for explainable image classification and annotation in critical applications (Q2238673) (← links)
- Embedding deep networks into visual explanations (Q2238677) (← links)
- Explaining individual predictions when features are dependent: more accurate approximations to Shapley values (Q2238680) (← links)
- "That's (not) the output I expected!" On the role of end user expectations in creating explanations of AI systems (Q2238691) (← links)
- A framework for step-wise explaining how to solve constraint satisfaction problems (Q2238723) (← links)
- Deep learning for credit scoring: do or don't? (Q2239871) (← links)
- To imprison or not to imprison: an analytics model for drug courts (Q2241165) (← links)
- A turbulent eddy-viscosity surrogate modeling framework for Reynolds-averaged Navier-Stokes simulations (Q2245422) (← links)
- A game-based approximate verification of deep neural networks with provable guarantees (Q2286751) (← links)
- On sparse optimal regression trees (Q2670540) (← links)
- Explaining Hierarchical Multi-linear Models (Q3297809) (← links)
- (Q4969233) (← links)
- (Q4999041) (← links)
- (Q4999101) (← links)
- Feature Selection Based on Shapley Additive Explanations on Metagenomic Data for Colorectal Cancer Diagnosis (Q5014553) (← links)
- (Q5020582) (← links)
- Learning Optimal Decision Sets and Lists with SAT (Q5026234) (← links)
- Explainable Deep Learning: A Field Guide for the Uninitiated (Q5026262) (← links)
- Interpreted machine learning in fluid dynamics: explaining relaminarisation events in wall-bounded shear flows (Q5077267) (← links)
- Interactive Slice Visualization for Exploring Machine Learning Models (Q5083345) (← links)