A comparison of learning rate selection methods in generalized Bayesian inference
From MaRDI portal
Publication: 6122017
DOI: 10.1214/21-BA1302
OpenAlex: W3114174487
MaRDI QID: Q6122017
Authors:
Publication date: 27 February 2024
Published in: Bayesian Analysis
Abstract: Generalized Bayes posterior distributions are formed by putting a fractional power on the likelihood before combining with the prior via Bayes's formula. This fractional power, which is often viewed as a remedy for potential model misspecification bias, is called the learning rate, and a number of data-driven learning rate selection methods have been proposed in the recent literature. Each of these proposals has a different focus and a different target it aims to achieve, which makes them difficult to compare. In this paper, we provide a direct head-to-head comparison of these learning rate selection methods in various misspecified model scenarios, in terms of several relevant metrics, in particular, coverage probability of the generalized Bayes credible regions. In some examples all the methods perform well, while in others the misspecification is too severe to be overcome, but we find that the so-called generalized posterior calibration algorithm tends to outperform the others in terms of credible region coverage probability.
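The construction described in the abstract — a likelihood raised to a fractional power (the learning rate) before being combined with the prior — can be illustrated with a minimal grid-based sketch. The snippet below is illustrative only and is not the paper's code; the normal location model, flat prior, and grid bounds are assumptions made for the example.

```python
import numpy as np

def generalized_posterior(loglik, logprior, grid, eta):
    """Generalized Bayes posterior density on a parameter grid.

    The log-likelihood is multiplied by the learning rate eta before
    being combined with the log-prior; eta = 1 recovers ordinary Bayes,
    while eta < 1 tempers (flattens) the likelihood's influence.
    """
    logpost = eta * loglik(grid) + logprior(grid)
    logpost -= logpost.max()                  # guard against overflow
    dens = np.exp(logpost)
    dens /= dens.sum() * (grid[1] - grid[0])  # normalize on the grid
    return dens

# Toy illustration (assumed model): N(theta, 1) data, flat prior.
rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=50)
grid = np.linspace(0.0, 4.0, 2001)

loglik = lambda ths: np.array([-0.5 * np.sum((x - t) ** 2) for t in ths])
logprior = lambda ths: np.zeros_like(ths)

dens_full = generalized_posterior(loglik, logprior, grid, eta=1.0)
dens_temp = generalized_posterior(loglik, logprior, grid, eta=0.5)
# The eta = 0.5 posterior is wider than the eta = 1 posterior:
# tempering discounts the data, which is the quantity the learning
# rate selection methods compared in the paper aim to tune.
```

A smaller learning rate widens credible regions, which is why coverage probability, the paper's main comparison metric, depends directly on how eta is chosen.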
Full work available at URL: https://arxiv.org/abs/2012.11349
model misspecification; coverage probability; generalized posterior calibration algorithm; SafeBayes algorithm
Statistics (62-XX)
Interacting random processes; statistical mechanics type models; percolation theory (60K35)
Cites Work
- Title not available (4 entries)
- A General Framework for Updating Belief Distributions
- Assigning a value to a power likelihood in a general Bayesian model
- Asymptotic Statistics
- Asymptotic behavior of Bayes estimates under possibly incorrect models
- Bayesian Inference for Logistic Models Using Pólya–Gamma Latent Variables
- Bayesian fractional posteriors
- Bayesian inference with misspecified models
- Calibrating general posterior credible regions
- Comment: "Bayes, oracle Bayes and empirical Bayes"
- Convergence rates of posterior distributions.
- Data tracking and the understanding of Bayesian consistency
- Data-driven priors and their posterior concentration rates
- Empirical Bayes posterior concentration in sparse high-dimensional linear models
- Empirical priors and coverage of posterior credible sets in a sparse normal mean model
- Empirical priors for prediction in sparse high-dimensional linear regression
- False confidence, non-additive beliefs, and valid statistical inference
- Fast rates for general unbounded loss functions: from ERM to generalized Bayes
- From \(\varepsilon\)-entropy to KL-entropy: analysis of minimum information complexity density estimation
- Fundamentals of nonparametric Bayesian inference
- General Bayesian updating and the loss-likelihood bootstrap
- Gibbs posterior inference on multivariate quantiles
- Gibbs posterior inference on the minimum clinically important difference
- Inconsistency of Bayesian inference for misspecified linear models, and a proposal for repairing it
- Is Bayes posterior just quick and dirty confidence?
- Limiting Behavior of Posterior Distributions when the Model is Incorrect
- Minimum clinically important difference in medical studies
- Misspecification in infinite-dimensional Bayesian statistics
- Model-free posterior inference on the area under the receiver operating characteristic curve
- On Bayesian consistency
- On inconsistent Bayes estimates of location
- On posterior concentration in misspecified models
- On the consistency of Bayes estimates
- Posterior consistency of Dirichlet mixtures in density estimation
- Robust Bayesian inference via coarsening
- Robust and rate-optimal Gibbs posterior inference on the boundary of a noisy image
- Safe probability
- The Bernstein-von Mises theorem under misspecification
- The consistency of posterior distributions in nonparametric problems
- The safe Bayesian. Learning the learning rate via the mixability gap
Cited In (2)
This page was built for publication: A comparison of learning rate selection methods in generalized Bayesian inference