Bayesian nonparametric cross-study validation of prediction methods
From MaRDI portal
Abstract: We consider comparisons of statistical learning algorithms using multiple data sets, via leave-one-in cross-study validation: each of the algorithms is trained on one data set; the resulting model is then validated on each remaining data set. This poses two statistical challenges that need to be addressed simultaneously. The first is the assessment of study heterogeneity, with the aim of identifying a subset of studies within which algorithm comparisons can be reliably carried out. The second is the comparison of algorithms using the ensemble of data sets. We address both problems by integrating clustering and model comparison. We formulate a Bayesian model for the array of cross-study validation statistics, which defines clusters of studies with similar properties and provides the basis for meaningful algorithm comparison in the presence of study heterogeneity. We illustrate our approach through simulations involving studies with varying severity of systematic errors, and in the context of medical prognosis for patients diagnosed with cancer, using high-throughput measurements of the transcriptional activity of the tumor's genes.
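The leave-one-in cross-study validation scheme described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the nearest-centroid classifier and accuracy score are stand-in choices, and the function names are hypothetical. The sketch only produces the array of cross-study validation statistics that the paper's Bayesian model would then take as input.

```python
import numpy as np

def cross_study_validation(studies, fit, predict, score):
    """Train on each study in turn; validate on every other study.

    studies : list of (X, y) pairs, one per study
    Returns a K x K array S with S[i, j] = score of the model trained
    on study i and evaluated on study j (diagonal left as NaN).
    """
    K = len(studies)
    S = np.full((K, K), np.nan)
    for i, (X_tr, y_tr) in enumerate(studies):
        model = fit(X_tr, y_tr)
        for j, (X_te, y_te) in enumerate(studies):
            if j != i:
                S[i, j] = score(y_te, predict(model, X_te))
    return S

# Toy learning algorithm: nearest class centroid (illustrative only).
def fit(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(model, X):
    classes = sorted(model)
    d = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

def accuracy(y_true, y_pred):
    return float((y_true == y_pred).mean())

# Three synthetic studies with a shared class signal.
rng = np.random.default_rng(0)
studies = []
for _ in range(3):
    y = rng.integers(0, 2, size=60)
    X = rng.normal(size=(60, 5)) + 2.0 * y[:, None]
    studies.append((X, y))

S = cross_study_validation(studies, fit, predict, accuracy)
```

In the paper, an array like `S` (one per algorithm) is what the Bayesian nonparametric model clusters and compares; systematic errors in a study would show up as anomalous rows and columns of this matrix.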
Recommendations
- A Bayesian approach for comparing cross-validated algorithms on multiple data sets
- Statistical comparison of classifiers through Bayesian hierarchical modelling
- Efficient leave-one-out cross-validation for Bayesian non-factorized normal and Student-t models
- Tracking cross-validated estimates of prediction error as studies accumulate
- Bayesian Model Assessment and Comparison Using Cross-Validation Predictive Densities
Cites work
- Scientific article (no title recorded); zbMATH DE number 5262704
- Scientific article (no title recorded); zbMATH DE number 3390151
- A Bayesian Semiparametric Model for Random-Effects Meta-Analysis
- A Bayesian justification of Cox's partial likelihood
- Bayesian Clustering and Product Partition Models
- Bayesian nonparametric cross-study validation of prediction methods
- Bootstrap methods: another look at the jackknife
- Cross-study validation and combined analysis of gene expression microarray data
- Defining predictive probability functions for species sampling models
- Empirical Bayes estimation of a binomial parameter via mixtures of Dirichlet processes
- Evaluating learning algorithms. A classification perspective
- Maximum transfer distance between partitions
- Random survival forests
- Representations for partially exchangeable arrays of random variables
- Statistical comparisons of classifiers over multiple data sets
Cited in (11 documents)
- Comparison of adult offense prediction methods based on juvenile offense trajectories using cross-validation
- A Bayesian approach for comparing cross-validated algorithms on multiple data sets
- Prediction scoring of data-driven discoveries for reproducible research
- Defining replicability of prediction rules
- The leave-worst-\(k\)-out criterion for cross validation
- Tracking cross-validated estimates of prediction error as studies accumulate
- Cross-study replicability in cluster analysis
- Prediction of hereditary cancers using neural networks
- Bayesian nonparametric cross-study validation of prediction methods
- Integration of survival data from multiple studies
- Scientific article (no title recorded); zbMATH DE number 4155661
MaRDI item: Q2349584