Interpretable classifiers using rules and Bayesian analysis: building a better stroke prediction model
From MaRDI portal
Abstract: We aim to produce predictive models that are not only accurate, but are also interpretable to human experts. Our models are decision lists, which consist of a series of if...then... statements (e.g., if high blood pressure, then stroke) that discretize a high-dimensional, multivariate feature space into a series of simple, readily interpretable decision statements. We introduce a generative model called Bayesian Rule Lists that yields a posterior distribution over possible decision lists. It employs a novel prior structure to encourage sparsity. Our experiments show that Bayesian Rule Lists has predictive accuracy on par with the current top algorithms for prediction in machine learning. Our method is motivated by recent developments in personalized medicine, and can be used to produce highly accurate and interpretable medical scoring systems. We demonstrate this by producing an alternative to the CHADS₂ score, actively used in clinical practice for estimating the risk of stroke in patients with atrial fibrillation. Our model is as interpretable as CHADS₂, but more accurate.
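The decision-list structure described in the abstract can be sketched in a few lines of Python. This is a minimal illustration of the data structure only, not the paper's fitted model or inference procedure: the rules, conditions, and risk probabilities below are hypothetical, chosen to mimic the "if high blood pressure, then stroke" style of statement. A decision list checks its rules in order and returns the outcome attached to the first rule whose condition matches.

```python
def make_decision_list(rules, default_prob):
    """Build a classifier from an ordered rule list.

    rules: list of (condition, probability) pairs, checked in order;
           the first condition that holds determines the prediction.
    default_prob: probability returned when no rule matches.
    """
    def predict(x):
        for condition, prob in rules:
            if condition(x):
                return prob
        return default_prob
    return predict

# Hypothetical rules in the spirit of the paper's examples
# (illustrative risk values, not estimates from any data):
rules = [
    (lambda x: x["hemiplegia"] and x["age"] > 60, 0.59),
    (lambda x: x["cerebrovascular_disorder"], 0.47),
    (lambda x: x["transient_ischaemic_attack"], 0.23),
]
predict = make_decision_list(rules, default_prob=0.05)

patient = {"hemiplegia": False, "cerebrovascular_disorder": True,
           "transient_ischaemic_attack": False, "age": 72}
print(predict(patient))  # 0.47: the second rule is the first to match
```

Bayesian Rule Lists places a posterior distribution over such ordered lists (which rules appear, in what order, and with what outcome probabilities), with a prior that favors short lists of short rules.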
Recommendations
- A Bayesian framework for learning rule sets for interpretable classification
- Bayesian Model Averaging in Proportional Hazard Models: Assessing the Risk of a Stroke
- Fractal and multifractional-based predictive optimization model for stroke subtypes' classification
- Bayesian hierarchical rule modeling for predicting medical conditions
Cites work
- Scientific article, zbMATH DE number 3860199 (no title available)
- Scientific article, zbMATH DE number 1301669 (no title available)
- Scientific article, zbMATH DE number 2030008 (no title available)
- Scientific article, zbMATH DE number 823069 (no title available)
- Scientific article, zbMATH DE number 3240929 (no title available)
- A Bayesian CART algorithm
- BART: Bayesian additive regression trees
- Bagging predictors
- Bayesian Inference on Network Traffic Using Link Count Data
- Bayesian hierarchical rule modeling for predicting medical conditions
- Bayesian treed models
- Dynamic trees for learning and design
- Efficient sequential decision-making algorithms for container inspection operations
- Heuristics of instability and stabilization in model selection
- Inductive Logic Programming: Theory and methods
- Inference from iterative simulation using multiple sequences
- Interpretable classifiers using rules and Bayesian analysis: building a better stroke prediction model
- LIBLINEAR: a library for large linear classification
- Learning theory analysis for association rules and sequential event prediction
- Learning with decision lists of data-dependent features
- Node harvest
- Predictive learning via rule ensembles
- Random forests
- Statistical modeling: The two cultures. (With comments and a rejoinder).
- To explain or to predict?
- Using decision lists to construct interpretable and parsimonious treatment regimes
- Very simple classification rules perform well on most commonly used datasets
Cited in (27)
- Efficient Learning of Interpretable Classification Rules
- bsnsing: A Decision Tree Induction Method Based on Recursive Optimal Boolean Rule Composition
- Optimization with Non-Differentiable Constraints with Applications to Fairness, Recall, Churn, and Other Goals
- Computing human-understandable strategies: deducing fundamental rules of poker strategy
- Building more accurate decision trees with the additive tree
- Learning certifiably optimal rule lists for categorical data
- A Bayesian framework for learning rule sets for interpretable classification
- Lexicographic preferences for predictive modeling of human decision making: a new machine learning method with an application in accounting
- Causal Rule Sets for Identifying Subgroups with Enhanced Treatment Effects
- A survey on the explainability of supervised machine learning
- Learning optimized risk scores
- Ultra-strong machine learning: comprehensibility of programs learned with ILP
- Interpretable machine learning: fundamental principles and 10 grand challenges
- False discovery rate control for effect modification in observational studies
- Learning customized and optimized lists of rules with mathematical programming
- Classification rules in relaxed logical form
- Maximizing interpretability and cost-effectiveness of surgical site infection (SSI) predictive models using feature-specific regularized logistic regression on preoperative temporal data
- Logic explained networks
- Rationalizing predictions by adversarial information calibration
- Considerations when learning additive explanations for black-box models
- Interpretable classifiers using rules and Bayesian analysis: building a better stroke prediction model
- SIRUS: stable and interpretable RUle set for classification
- Robust subgroup discovery. Discovering subgroup lists using MDL
- Efficient learning of large sets of locally optimal classification rules
- A decision-theoretic approach for model interpretability in Bayesian framework
- Bayesian hierarchical rule modeling for predicting medical conditions
- Bayesian quickest detection of credit card fraud
This page was built for publication: Interpretable classifiers using rules and Bayesian analysis: building a better stroke prediction model