Definitions, methods, and applications in interpretable machine learning

From MaRDI portal
Publication:5218493

DOI: 10.1073/pnas.1900654116
zbMath: 1431.62266
arXiv: 1901.04592
OpenAlex: W2910705748
Wikidata: Q90753735
Scholia: Q90753735
MaRDI QID: Q5218493

No author found.

Publication date: 4 March 2020

Published in: Proceedings of the National Academy of Sciences

Full work available at URL: https://arxiv.org/abs/1901.04592




Related Items (30)

What are the Most Important Statistical Ideas of the Past 50 Years?
Structure-preserving neural networks
Physically interpretable machine learning algorithm on multidimensional non-linear fields
Efficient Learning of Interpretable Classification Rules
Scrutinizing XAI using linear ground-truth data with suppressor variables
High-resolution Bayesian mapping of landslide hazard with unobserved trigger event
Global sensitivity analysis in epidemiological modeling
Data-driven research in retail operations: a review
A General Framework for Inference on Algorithm-Agnostic Variable Importance
Flexible tree-structured regression models for discrete event times
Stable Discovery of Interpretable Subgroups via Calibration in Causal Studies
Discovering interpretable elastoplasticity models via the neural polynomial method enabled symbolic regressions
Visualizing the Implicit Model Selection Tradeoff
How to find a good explanation for clustering?
A reluctant additive model framework for interpretable nonlinear individualized treatment rules
Assessing the communication gap between AI models and healthcare professionals: explainability, utility and trust in AI-driven clinical decision-making
A machine learning approach to differentiate between COVID-19 and influenza infection using synthetic infection and immune response data
Descriptive accuracy in explanations: the case of probabilistic classifiers
Cross-study replicability in cluster analysis
Explaining classifiers with measures of statistical association
Understanding the effect of contextual factors and decision making on team performance in Twenty20 cricket: an interpretable machine learning approach
Interpreting machine-learning models in transformed feature space with an application to remote-sensing classification
Prediction, Estimation, and Attribution
SIRUS: stable and interpretable RUle set for classification
A Survey on the Explainability of Supervised Machine Learning
Making sense of raw input
A robust algorithm for explaining unreliable machine learning survival models using the Kolmogorov-Smirnov bounds
Default risk prediction and feature extraction using a penalized deep neural network
Veridical data science
Prediction, Estimation, and Attribution

This page was built for publication: Definitions, methods, and applications in interpretable machine learning