Interpretation of black-box predictive models
DOI: 10.1007/978-3-319-21852-6_19
zbMATH Open: 1336.68213
OpenAlex: W2299435411
MaRDI QID: Q2805733
FDO: Q2805733
Authors: Vladimir Cherkassky, Sauptik Dhar
Publication date: 13 May 2016
Published in: Measures of Complexity
Full work available at URL: https://doi.org/10.1007/978-3-319-21852-6_19
Cites Work
- Gene selection for cancer classification using support vector machines
- Statistical modeling: The two cultures (with comments and a rejoinder)
- Classifier technology and the illusion of progress
- The Logic of Inductive Inference
- Rule extraction from support vector machines
- The maximal data piling direction for discrimination
Cited In (9)
- Discovery Science
- Interpreting machine-learning models in transformed feature space with an application to remote-sensing classification
- Local interpretation of supervised learning models based on high dimensional model representation
- Visualizing the Effects of Predictor Variables in Black Box Supervised Learning Models
- Techniques to improve ecological interpretability of black-box machine learning models. Case study on biological health of streams in the United States with gradient boosted trees
- How to explain individual classification decisions
- SLISEMAP: supervised dimensionality reduction through local explanations
- Parameter identifiability in statistical machine learning: a review
- Structural modelling with sparse kernels
This page was built for publication: Interpretation of black-box predictive models