Interpretation of black-box predictive models
From MaRDI portal
Publication: 2805733
Cites work
- scientific article; zbMATH DE number 1497419 (no title available)
- scientific article; zbMATH DE number 1843268 (no title available)
- scientific article; zbMATH DE number 2107836 (no title available)
- scientific article; zbMATH DE number 823069 (no title available)
- Classifier technology and the illusion of progress
- Gene selection for cancer classification using support vector machines
- Rule extraction from support vector machines.
- Statistical modeling: the two cultures (with comments and a rejoinder)
- The Logic of Inductive Inference
- The maximal data piling direction for discrimination
Cited in (9)
- Discovery Science
- Techniques to improve ecological interpretability of black-box machine learning models. Case study on biological health of streams in the United States with gradient boosted trees
- Structural modelling with sparse kernels
- Interpreting machine-learning models in transformed feature space with an application to remote-sensing classification
- How to explain individual classification decisions
- Local interpretation of supervised learning models based on high dimensional model representation
- Visualizing the Effects of Predictor Variables in Black Box Supervised Learning Models
- SLISEMAP: supervised dimensionality reduction through local explanations
- Parameter identifiability in statistical machine learning: a review
This page was built for publication: Interpretation of black-box predictive models