Null Hypothesis Significance Testing Defended and Calibrated by Bayesian Model Checking
From MaRDI portal
Publication:5056972
Cites work
- scientific article; zbMATH DE number 3441432
- scientific article; zbMATH DE number 2199136
- A STATISTICAL PARADOX
- A comment on D. V. Lindley's statistical paradox
- A critical evaluation of the current “p‐value controversy”
- A general framework for model-based statistics
- Almost sure hypothesis testing and a resolution of the Jeffreys-Lindley paradox
- Calibration of \(p\) values for testing precise null hypotheses
- Confidence distributions and empirical Bayes posterior distributions unified as distributions of evidential support
- Correcting false discovery rates for their bias toward false positives
- Decision making under uncertainty using imprecise probabilities
- Error probabilities in default Bayesian hypothesis testing
- Inference after checking multiple Bayesian models for data conflict and applications to mitigating the influence of rejected priors
- Large-scale inference. Empirical Bayes methods for estimation, testing, and prediction
- Measuring statistical evidence using relative belief
- Post-Processing Posterior Predictive \(p\)-Values
- Prior and posterior predictive p-values in the one-sided location parameter testing problem
- Reporting Bayes factors or probabilities to decision makers of unknown loss functions
- Revised standards for statistical evidence
- The ASA Statement on p-Values: Context, Process, and Purpose
- The False Positive Risk: A Proposal Concerning What to Do About p-Values
- The one-sided posterior predictive \(p\)-value for Fieller's problem
Cited in (6)
- Null Hypothesis Significance Testing Interpreted and Calibrated by Estimating Probabilities of Sign Errors: A Bayes-Frequentist Continuum
- Maximum entropy derived and generalized under idempotent probability to address Bayes-frequentist uncertainty and model revision uncertainty: an information-theoretic semantics for possibility theory
- Publication Policies for Replicable Research and the Community-Wide False Discovery Rate
- Statistical evidence and surprise unified under possibility theory
- The \(p\)-value interpreted as the posterior probability of explaining the data: applications to multiple testing and to restricted parameter spaces
- Fiducialize statistical significance: transforming \(p\)-values into conservative posterior probabilities and Bayes factors