How to explain individual classification decisions
From MaRDI portal
Publication:2896115
Cited in (36)
- Explaining individual predictions when features are dependent: more accurate approximations to Shapley values
- Foundations of fine-grained explainability
- Sharing hash codes for multiple purposes
- SIRUS: stable and interpretable RUle set for classification
- Explanation in artificial intelligence: insights from the social sciences
- On the reasons behind decisions
- Learning Optimal Decision Sets and Lists with SAT
- A comparison of instance-level counterfactual explanation algorithms for behavioral and textual data: SEDC, LIME-C and SHAP-C
- Explicative deep learning with probabilistic formal concepts in a natural language processing task
- Assessing heuristic machine learning explanations with model counting
- A framework for inherently interpretable optimization models
- Comments on ``Data science, big data and statistics''
- Learning with rationales for document classification
- scientific article; zbMATH DE number 7307487
- scientific article; zbMATH DE number 1934575
- Describing the result of a classifier to the end-user: geometric-based sensitivity
- Interpretation of black-box predictive models
- Model-agnostic explanations for survival prediction models
- Scrutinizing XAI using linear ground-truth data with suppressor variables
- An efficient explanation of individual classifications using game theory
- Variable importance evaluation with personalized odds ratio for machine learning model interpretability with applications to electronic health records-based mortality prediction
- A survey on the explainability of supervised machine learning
- Explainable deep learning: a field guide for the uninitiated
- Conclusive local interpretation rules for random forests
- Rationalizing predictions by adversarial information calibration
- SLISEMAP: supervised dimensionality reduction through local explanations
- Temporal inductive path neural network for temporal knowledge graph reasoning
- Explaining AI decisions using efficient methods for learning sparse Boolean formulae
- Backtransformation: a new representation of data processing chains with a scalar decision function
- Constrained dynamics, stochastic numerical methods and the modeling of complex systems. Abstracts from the workshop held May 26--31, 2024
- Feature necessity & relevancy in ML classifier explanations
- The LRP toolbox for artificial neural networks
- A Symbolic Approach for Counterfactual Explanations
- Considerations when learning additive explanations for black-box models
- A new class of explanations for classifiers with non-binary features
- Explaining machine learning models using entropic variable projection
This page was built for publication: How to explain individual classification decisions