innsight
Get the Insights of Your Neural Network
Last update: 21 December 2023
Copyright license: MIT + file LICENSE
Software version identifier: 0.1.0, 0.1.1, 0.2.0, 0.3.0
Interpretation methods for analyzing the behavior and individual predictions of modern neural networks in a three-step procedure: converting the model, applying the interpretation method, and visualizing the results. Implemented methods include 'Connection Weights' described by Olden et al. (2004) <doi:10.1016/j.ecolmodel.2004.03.013>, layer-wise relevance propagation ('LRP') described by Bach et al. (2015) <doi:10.1371/journal.pone.0130140>, deep learning important features ('DeepLIFT') described by Shrikumar et al. (2017) <arXiv:1704.02685>, and gradient-based methods such as 'SmoothGrad' described by Smilkov et al. (2017) <arXiv:1706.03825>, 'Gradient x Input' described by Baehrens et al. (2009) <arXiv:0912.1128>, and 'Vanilla Gradient'.
- An accurate comparison of methods for quantifying variable importance in artificial neural networks using simulated data
- On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation
- Learning Important Features Through Propagating Activation Differences
- SmoothGrad: removing noise by adding noise
- How to Explain Individual Classification Decisions
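The three-step procedure described above (convert the model, run an interpretation method, visualize the results) can be sketched in R roughly as follows. This is a minimal sketch, not a definitive usage example: the class names (`Converter`, `LRP`), the `input_dim` argument, and the `plot()`/`get_result()` calls follow innsight's R6-based interface as documented on CRAN, while the toy torch model and random input data are illustrative assumptions:

```r
library(torch)
library(innsight)

# Toy feed-forward model (illustrative; innsight also supports
# keras and neuralnet models)
model <- nn_sequential(
  nn_linear(5, 10),
  nn_relu(),
  nn_linear(10, 2)
)

# Step 1: convert the model into innsight's internal representation
converter <- Converter$new(model, input_dim = c(5))

# Step 2: apply an interpretation method, e.g. LRP, to some input data
data <- torch_randn(25, 5)
lrp <- LRP$new(converter, data)

# Step 3: visualize or extract the per-feature relevance scores
plot(lrp)
result <- get_result(lrp)
```

Other implemented methods (e.g. `Gradient`, `SmoothGrad`, `DeepLift`, `ConnectionWeights`) follow the same pattern: construct the method object from the converter and the data, then plot or extract the results.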