Defining replicability of prediction rules
From MaRDI portal
Publication:6145147
Abstract: In this article I propose an approach for defining replicability for prediction rules. Motivated by a recent NAS report, I start from the perspective that replicability is obtaining consistent results across studies suitable to address the same prediction question, each of which has obtained its own data. I then discuss concepts and issues in defining the key elements of this statement. I focus specifically on the meaning of "consistent results" in typical utilization contexts, and propose a multi-agent framework for defining replicability, in which agents are neither partners nor adversaries. I recover some of the prevalent practical approaches as special cases. I hope to provide guidance for a more systematic assessment of replicability in machine learning.
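The working definition in the abstract, consistent results across studies that each collect their own data on the same prediction question, can be illustrated with a minimal sketch. Everything below (the synthetic data generator, the least-squares rule, and the 0.10 agreement tolerance) is an assumed, hypothetical setup for illustration, not the paper's method.

```python
import numpy as np

def make_study(n, seed):
    """Simulate one study's data drawn from a shared underlying relationship."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, 2))
    y = (X @ np.array([1.0, -0.5]) + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# Each study addresses the same prediction question with its own data.
studies = [make_study(300, seed) for seed in (1, 2, 3)]

# Train a simple linear prediction rule on the first study
# (least squares on the labels, thresholded at 0.5).
X0, y0 = studies[0]
beta, *_ = np.linalg.lstsq(np.c_[np.ones(len(X0)), X0], y0, rcond=None)

def predict(X):
    return (np.c_[np.ones(len(X)), X] @ beta > 0.5).astype(int)

# "Consistent results": the rule's accuracy agrees across studies
# to within a tolerance the agents involved would have to agree on.
accs = [float(np.mean(predict(X) == y)) for X, y in studies]
replicable = max(accs) - min(accs) <= 0.10  # hypothetical tolerance
print([round(a, 3) for a in accs], replicable)
```

The tolerance here stands in for the harder question the article addresses: what "consistent" should mean in a given utilization context, and from whose perspective.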
Cites work
- scientific article; zbMATH DE number 3616126
- scientific article; zbMATH DE number 735230
- scientific article; zbMATH DE number 1104922
- scientific article; zbMATH DE number 3087284
- Bayesian nonparametric cross-study validation of prediction methods
- Fairness through awareness
- Hierarchical resampling for bagging in multistudy prediction with applications to human neurochemical sensing
- Integration of survival data from multiple studies
- Modeling between-study heterogeneity for improved replicability in gene signature selection and clinical prediction
- Stacked regressions
- Statistical modeling: The two cultures. (With comments and a rejoinder).
- Thomas Bayes's Bayesian Inference
- Tracking cross-validated estimates of prediction error as studies accumulate
- Training replicable predictors in multiple studies
- Veridical data science