How to explain individual classification decisions
From MaRDI portal
Publication: 2896115
zbMATH Open: 1242.62049 · MaRDI QID: Q2896115 · FDO: Q2896115
Authors: David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, Klaus-Robert Müller
Publication date: 13 July 2012
Published in: Journal of Machine Learning Research (JMLR)
Full work available at URL: http://www.jmlr.org/papers/v11/baehrens10a.html
Mathematics Subject Classification: Classification and discrimination; cluster analysis (statistical aspects) (62H30); Learning and adaptive systems in artificial intelligence (68T05)
Cited In (36)
- Explaining individual predictions when features are dependent: more accurate approximations to Shapley values
- Sharing hash codes for multiple purposes
- Foundations of fine-grained explainability
- SIRUS: stable and interpretable RUle set for classification
- Explanation in artificial intelligence: insights from the social sciences
- On the reasons behind decisions
- Learning Optimal Decision Sets and Lists with SAT
- A comparison of instance-level counterfactual explanation algorithms for behavioral and textual data: SEDC, LIME-C and SHAP-C
- A framework for inherently interpretable optimization models
- Explicative deep learning with probabilistic formal concepts in a natural language processing task
- Comments on "Data science, big data and statistics"
- Assessing heuristic machine learning explanations with model counting
- Title not available
- Learning with rationales for document classification
- Title not available
- Describing the result of a classifier to the end-user: geometric-based sensitivity
- Model-agnostic explanations for survival prediction models
- Interpretation of black-box predictive models
- An efficient explanation of individual classifications using game theory
- Scrutinizing XAI using linear ground-truth data with suppressor variables
- Variable importance evaluation with personalized odds ratio for machine learning model interpretability with applications to electronic health records-based mortality prediction
- A survey on the explainability of supervised machine learning
- Explainable deep learning: a field guide for the uninitiated
- Conclusive local interpretation rules for random forests
- Rationalizing predictions by adversarial information calibration
- SLISEMAP: supervised dimensionality reduction through local explanations
- Temporal inductive path neural network for temporal knowledge graph reasoning
- Explaining AI decisions using efficient methods for learning sparse Boolean formulae
- Constrained dynamics, stochastic numerical methods and the modeling of complex systems. Abstracts from the workshop held May 26--31, 2024
- Backtransformation: a new representation of data processing chains with a scalar decision function
- Feature necessity & relevancy in ML classifier explanations
- A Symbolic Approach for Counterfactual Explanations
- The LRP toolbox for artificial neural networks
- Considerations when learning additive explanations for black-box models
- A new class of explanations for classifiers with non-binary features
- Explaining machine learning models using entropic variable projection