Replication, statistical consistency, and publication bias
From MaRDI portal
Publication:2437263
Recommendations
- The consistency test does not -- and cannot -- deliver what is advertised: a comment on Francis (2013)
- On biases in assessing replicability, statistical consistency and publication bias
- Statistical methods for replicability assessment
- \(p_{\text{rep}}\): an agony in five fits
- Statistical proof? The problem of irreproducibility
Cites work
- scientific article; zbMATH DE number 4090552
- scientific article; zbMATH DE number 48701
- Fixed-Sample-Size Analysis of Sequential Observations
- Introduction to Meta‐Analysis
- Operating Characteristics of a Rank Correlation Test for Publication Bias
- Trim and Fill: A Simple Funnel‐Plot–Based Method of Testing and Adjusting for Publication Bias in Meta‐Analysis
Cited in (24)
- \textsc{Tutorial}: ``With sufficient increases in X, more people will engage in the target behavior''
- Clarifications on the application and interpretation of the test for excess significance and its extensions
- It really just does not follow, comments on Francis (2013)
- Interrogating \(p\)-values
- The consistency test may be too weak to be useful: its systematic application would not improve effect size estimation in meta-analyses
- The consistency test does not -- and cannot -- deliver what is advertised: a comment on Francis (2013)
- On biases in assessing replicability, statistical consistency and publication bias
- An evaluation of statistical methods for aggregate patterns of replication failure
- We should focus on the biases that matter: a reply to commentaries
- Improving the conduct and reporting of statistical analysis in psychology. Response to the comments ``Thinking about data, research methods, and statistical analyses'' and ``Encourage playing with data and discourage questionable reporting practices'' provided in response to ``Playing with data -- or how to discourage questionable research practices and stimulate researchers to do things right''
- \(p_{\text{rep}}\): an agony in five fits
- Thinking about data, research methods, and statistical analyses. Comment on ``Playing with data -- or how to discourage questionable research practices and stimulate researchers to do things right''
- Method in experiment: rhetoric and reality
- Statistical methods for replicability assessment
- Scientific self-correction: the Bayesian way
- The falsificationist foundation for null hypothesis significance testing
- Distance from a distance: the robustness of psychological distance effects
- Playing with data -- or how to discourage questionable research practices and stimulate researchers to do things right
- What type of Type I error? Contrasting the Neyman-Pearson and Fisherian approaches in the context of exact and direct replications
- Variation and Covariation in Large-Scale Replication Projects: An Evaluation of Replicability
- How redefining statistical significance can worsen the replication crisis
- On the impossibility of empirical controls of scientific theories -- from the point of view of a psychologist
- Replication study design: confidence intervals and commentary
- Statistical proof? The problem of irreproducibility