Interpretable classifiers using rules and Bayesian analysis: building a better stroke prediction model


DOI: 10.1214/15-AOAS848
zbMATH Open: 1454.62348
arXiv: 1511.01644
OpenAlex: W2118022153
Wikidata: Q56340208
Scholia: Q56340208
MaRDI QID: Q902909
FDO: Q902909


Authors: Benjamin Letham, Cynthia Rudin, Tyler H. McCormick, David Madigan


Publication date: 4 January 2016

Published in: The Annals of Applied Statistics

Abstract: We aim to produce predictive models that are not only accurate, but are also interpretable to human experts. Our models are decision lists, which consist of a series of if...then... statements (e.g., if high blood pressure, then stroke) that discretize a high-dimensional, multivariate feature space into a series of simple, readily interpretable decision statements. We introduce a generative model called Bayesian Rule Lists that yields a posterior distribution over possible decision lists. It employs a novel prior structure to encourage sparsity. Our experiments show that Bayesian Rule Lists has predictive accuracy on par with the current top algorithms for prediction in machine learning. Our method is motivated by recent developments in personalized medicine, and can be used to produce highly accurate and interpretable medical scoring systems. We demonstrate this by producing an alternative to the CHADS2 score, actively used in clinical practice for estimating the risk of stroke in patients who have atrial fibrillation. Our model is as interpretable as CHADS2, but more accurate.
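The abstract describes a decision list as an ordered series of if...then statements. A minimal sketch of how such a list is applied at prediction time (this is illustrative only, not the authors' implementation; the feature names and probabilities below are hypothetical, and in Bayesian Rule Lists the rule order and per-rule risk estimates come from a posterior distribution over decision lists):

```python
def predict_risk(patient, rule_list, default_prob):
    """Return the risk estimate from the first rule whose antecedent matches."""
    for antecedent, prob in rule_list:
        if antecedent(patient):   # rules are checked in order; first match wins
            return prob
    return default_prob           # no rule matched: fall back to the default rule

# Hypothetical decision list for stroke risk (conditions and numbers invented
# for illustration; a fitted model would supply these).
rules = [
    (lambda p: p["hemiplegia"] and p["age"] > 60, 0.58),
    (lambda p: p["transient_ischaemic_attack"], 0.24),
    (lambda p: p["age"] <= 50, 0.04),
]

patient = {"hemiplegia": False, "transient_ischaemic_attack": True, "age": 70}
print(predict_risk(patient, rules, default_prob=0.11))  # → 0.24
```

Because prediction reduces to reading off the first matching rule, the whole model can be printed and audited by a clinician, which is the sense in which the paper's models are "as interpretable as CHADS2".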


Full work available at URL: https://arxiv.org/abs/1511.01644




Cited In (27)

