The following pages link to Grad-CAM (Q46807):
Displaying 24 items.
- (Q50801) (redirect page)
- End-to-end deep representation learning for time series clustering: a comparative study (Q832639)
- Learning localized features in 3D CAD models for manufacturability analysis of drilled holes (Q1644399)
- A survey of safety and trustworthiness of deep neural networks: verification, testing, adversarial attack and defence, and interpretability (Q2026298)
- Explainable time-frequency convolutional neural network for microseismic waveform classification (Q2055569)
- Understanding adversarial robustness via critical attacking route (Q2056311)
- Multi-resolution 3D CNN for learning multi-scale spatial features in CAD models (Q2065636)
- Joint and individual analysis of breast cancer histologic images and genomic covariates (Q2078283)
- Black-box adversarial attacks by manipulating image attributes (Q2123528)
- Sensitive loss: improving accuracy and fairness of face representations with discrimination-aware deep learning (Q2124451)
- Relation between prognostics predictor evaluation metrics and local interpretability SHAP values (Q2124455)
- What can we learn from telematics car driving data: a survey (Q2138624)
- Multi-modal genotype and phenotype mutual learning to enhance single-modal input based longitudinal outcome prediction (Q2170150)
- Some thoughts on knowledge-enhanced machine learning (Q2237522)
- Counterfactual state explanations for reinforcement learning agents via generative deep learning (Q2238641)
- Embedding deep networks into visual explanations (Q2238677)
- Kandinsky patterns (Q2238716)
- A framework for step-wise explaining how to solve constraint satisfaction problems (Q2238723)
- Efficient Estimation of the ANOVA Mean Dimension, with an Application to Neural Net Classification (Q4995121)
- Grouping of contracts in insurance using neural networks (Q5003353)
- Detecting Scene-Plausible Perceptible Backdoors in Trained DNNs Without Access to the Training Set (Q5004356)
- Explainable Deep Learning: A Field Guide for the Uninitiated (Q5026262)
- A Survey on the Explainability of Supervised Machine Learning (Q5145841)
- Optimizing for Interpretability in Deep Neural Networks with Tree Regularization (Q5154764)