Prediction scoring of data-driven discoveries for reproducible research
From MaRDI portal
Publication:2104015
Abstract: Predictive modeling uncovers knowledge and insights about a hypothesized data generating mechanism (DGM). Results from different studies of a complex DGM, derived from different datasets and using complicated models and algorithms, are hard to compare quantitatively because of random noise and statistical uncertainty in model results. This has been one of the main contributors to the replication crisis in the behavioral sciences. The contribution of this paper is to apply prediction scoring to the problem of comparing two studies, as arises when evaluating replications or competing evidence. We examine the role of predictive models in quantitatively assessing agreement between two datasets that are assumed to come from two distinct DGMs. We formalize a distance between the DGMs that is estimated using cross-validation. We argue that the resulting prediction scores depend on the predictive models created by cross-validation; in this sense, the prediction scores measure the distance between DGMs along the dimension of the particular predictive model. Using human behavior data from experimental economics, we demonstrate that prediction scores can be used to evaluate preregistered hypotheses and provide insights when comparing data from different populations and settings. We examine the asymptotic behavior of the prediction scores using simulated experimental data and demonstrate that leveraging competing predictive models can reveal important differences between the underlying DGMs. Our proposed cross-validated prediction scores are capable of quantifying differences between unobserved data generating mechanisms and allow for the validation and assessment of results from complex models.
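The abstract's idea of a cross-validated, model-dependent distance between two DGMs can be illustrated with a minimal sketch. The code below is not the authors' implementation: it assumes a binary outcome, uses the logarithmic score (a strictly proper scoring rule, per the cited Gneiting-Raftery work), and fits models on cross-validation folds of dataset A, scoring each fold's model both on held-out A data and on all of dataset B. The function names (`cv_prediction_score`, `log_score`) and the plug-in `fit`/`predict` callables are hypothetical choices for illustration; the gap between the two mean scores serves as a rough proxy for the distance between the underlying DGMs along the dimension of the chosen model.

```python
import numpy as np

def log_score(y_true, p_pred, eps=1e-12):
    """Logarithmic score (negative log-likelihood) for binary outcomes;
    lower is better, and the rule is strictly proper."""
    p = np.clip(p_pred, eps, 1 - eps)
    return -(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

def cv_prediction_score(X_a, y_a, X_b, y_b, fit, predict, k=5, seed=0):
    """K-fold cross-validation on dataset A. Each fold's fitted model
    scores (i) the held-out A fold and (ii) all of dataset B.
    Returns the two mean scores; their gap is a model-dependent
    proxy for the distance between the two DGMs."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y_a))
    folds = np.array_split(idx, k)
    scores_a, scores_b = [], []
    for fold in folds:
        train = np.setdiff1d(idx, fold)          # remaining k-1 folds
        model = fit(X_a[train], y_a[train])
        scores_a.append(log_score(y_a[fold], predict(model, X_a[fold])).mean())
        scores_b.append(log_score(y_b, predict(model, X_b)).mean())
    return np.mean(scores_a), np.mean(scores_b)
```

As a toy usage, a constant-rate predictor (`fit` returns the training base rate, `predict` emits it for every row) trained on Bernoulli(0.2) data and scored against Bernoulli(0.8) data yields a markedly worse score on the second dataset, flagging the difference between the two generating mechanisms even for this crude model.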
Recommendations
- Information confidence scores for prediction models
- Bayesian nonparametric cross-study validation of prediction methods
- Training replicable predictors in multiple studies
- The lack of cross-validation can lead to inflated results and spurious conclusions: a re-analysis of the MacArthur violence risk assessment study
- Statistical inference for measures of predictive success
Cites work
- Scientific article, zbMATH DE number 5851866 (title unavailable)
- Scientific article, zbMATH DE number 735230 (title unavailable)
- A survey of Bayesian predictive methods for model assessment, selection and comparison
- Approximating cross-validatory predictive evaluation in Bayesian latent variable models with integrated IS and WAIC
- Conditional vs marginal estimation of the predictive loss of hierarchical models using WAIC and cross-validation
- Consistent cross-validatory model-selection for dependent data: hv-block cross-validation
- Data analysis, computation and mathematics
- Difficulty of selecting among multilevel models using predictive accuracy
- Measuring and testing dependence by correlation of distances
- On a Method of Determining Whether a Sample of Size n Supposed to Have Been Drawn from a Parent Population Having a Known Probability Integral Has Probably Been Drawn at Random
- Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC
- Predictive Inference and Scientific Reproducibility
- Probabilistic Forecasts, Calibration and Sharpness
- Remarks on a Multivariate Transformation
- Strictly Proper Scoring Rules, Prediction, and Estimation
- The ASA Statement on p-Values: Context, Process, and Purpose
- The elements of statistical learning. Data mining, inference, and prediction
- Understanding predictive information criteria for Bayesian models