Priors via imaginary training samples of sufficient statistics for objective Bayesian hypothesis testing
DOI: 10.1007/s40300-019-00159-0 · zbMATH Open: 1437.62091 · OpenAlex: W2977231085 · Wikidata: Q127215044 · MaRDI QID: Q2175373
Authors: D. Fouskakis
Publication date: 29 April 2020
Published in: Metron
Full work available at URL: https://doi.org/10.1007/s40300-019-00159-0
Recommendations
- Integral priors and constrained imaginary training samples for nested and non-nested Bayesian model comparison
- Objective priors in the empirical Bayes framework
- Training samples in objective Bayesian model selection
- Objective priors: an introduction for frequentists
- Objective Bayesian hypothesis testing in binomial regression models with integral prior distributions
- An objective Bayesian approach to multistage hypothesis testing
- Unbiased Bayes estimates and improper priors
- An objective Bayes factor with improper priors
- Prior-free inference for objective Bayesian analysis and model selection
Keywords: Bayesian hypothesis testing; expected-posterior priors; objective priors; power-expected-posterior priors; sufficient statistics; imaginary training samples
MSC classification: Parametric hypothesis testing (62F03); Bayesian inference (62F15); Sufficient statistics and fields (62B05)
Cites Work
- Hierarchical shrinkage priors for regression models
- Penalising model component complexity: a principled, practical approach to constructing priors
- The Intrinsic Bayes Factor for Model Selection and Prediction
- A comment on D. V. Lindley's statistical paradox
- A Reference Bayesian Test for Nested Hypotheses and its Relationship to the Schwarz Criterion
- Prior distributions for objective Bayesian analysis
- The formal definition of reference priors
- Training samples in objective Bayesian model selection
- Power-expected-posterior priors for variable selection in Gaussian linear models
- Expected-posterior prior distributions for model selection
- Compatibility of prior specifications across linear models
- Bayesian Hypothesis Testing: A Reference Approach
- Limiting behavior of the Jeffreys power-expected-posterior Bayes factor in Gaussian linear models
- Accurate and stable Bayesian model selection: the median intrinsic Bayes factor
- On the use of Non-Local Prior Densities in Bayesian Hypothesis Tests
- Generalization of Jeffreys Divergence-Based Priors for Bayesian Hypothesis Testing
- Information consistency of the Jeffreys power-expected-posterior prior in Gaussian linear models
- Power-expected-posterior priors for generalized linear models
Cited In (3)