Leave Pima Indians alone: binary regression as a benchmark for Bayesian computation
From MaRDI portal
Abstract: Whenever a new approach to performing Bayesian computation is introduced, a common practice is to showcase it on a binary regression model and datasets of moderate size. This paper discusses to what extent this practice is sound. It also reviews the current state of the art of Bayesian computation, using binary regression as a running example. Both sampling-based algorithms (importance sampling, MCMC and SMC) and fast approximations (Laplace and EP) are covered. Extensive numerical results are provided, some of which may go against conventional wisdom regarding the effectiveness of certain algorithms. Implications for other problems (variable selection) and other models are also discussed.
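As a minimal illustration of one of the sampling-based algorithms the abstract mentions, the sketch below runs random-walk Metropolis MCMC for a Bayesian logistic regression posterior. This is not the paper's implementation: the synthetic data, the N(0, 25 I) prior, and the step size 0.2 are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary-regression data of "moderate size" (stand-in for a
# dataset like Pima); the true coefficients are arbitrary illustration values.
n, d = 200, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, d - 1))])
beta_true = np.array([0.5, -1.0, 1.5])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)

def log_post(beta, tau2=25.0):
    """Log-posterior: logistic log-likelihood plus a N(0, tau2 I) prior."""
    eta = X @ beta
    # log-likelihood written stably via logaddexp(0, eta) = log(1 + exp(eta))
    ll = np.sum(y * eta - np.logaddexp(0.0, eta))
    return ll - 0.5 * beta @ beta / tau2

# Random-walk Metropolis with an isotropic Gaussian proposal
beta = np.zeros(d)
lp = log_post(beta)
samples, accepted = [], 0
for _ in range(5000):
    prop = beta + 0.2 * rng.standard_normal(d)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
        beta, lp = prop, lp_prop
        accepted += 1
    samples.append(beta)
samples = np.array(samples[1000:])  # discard burn-in

print("acceptance rate:", accepted / 5000)
print("posterior mean:", samples.mean(axis=0))
```

The posterior mean recovers the sign pattern of the true coefficients; tuning the proposal scale toward the optimal-scaling guidelines cited below would improve mixing.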
Recommendations
Cites work
- scientific article; zbMATH DE number 6377992 (no title available)
- scientific article; zbMATH DE number 6378011 (no title available)
- scientific article; zbMATH DE number 5544465 (no title available)
- scientific article; zbMATH DE number 3567782 (no title available)
- scientific article; zbMATH DE number 2117879 (no title available)
- scientific article; zbMATH DE number 6781368 (no title available)
- A General Framework for Updating Belief Distributions
- A sequential particle filter method for static models
- A weakly informative default prior distribution for logistic and other regression models
- Accurate Approximations for Posterior Moments and Marginal Densities
- Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations (with discussion)
- Approximations for binary Gaussian process classification
- Bayesian Analysis of Binary and Polychotomous Response Data
- Bayesian Inference for Logistic Models Using Pólya–Gamma Latent Variables
- Bayesian auxiliary variable models for binary and multinomial regression
- Bias reduction of maximum likelihood estimates
- Computation of Gaussian orthant probabilities in high dimension
- Expectation Propagation in the Large Data Limit
- Fast simulation of truncated Gaussian distributions
- Fully Exponential Laplace Approximations to Expectations and Variances of Nonpositive Functions
- Graphical models, exponential families, and variational inference
- Inference for Lévy-driven stochastic volatility models via adaptive sequential Monte Carlo
- Leave Pima Indians alone: binary regression as a benchmark for Bayesian computation
- MCMC using Hamiltonian dynamics
- Mean-field variational approximate Bayesian inference for latent variable models
- Monte Carlo and quasi-Monte Carlo sampling
- Nested sampling for general Bayesian computation
- Numerical recipes: the art of scientific computing
- On the properties of variational approximations of Gibbs posteriors
- Optimal scaling for various Metropolis-Hastings algorithms
- Optimal tuning of the hybrid Monte Carlo algorithm
- Pattern recognition and machine learning
- Reflection implies the SCH
- Riemann manifold Langevin and Hamiltonian Monte Carlo methods. With discussion and authors' reply
- Sequential Imputations and Bayesian Missing Data Problems
- Sequential Monte Carlo Samplers
- Simulation-based regularized logistic regression
- Split Hamiltonian Monte Carlo
- Statistical learning theory and stochastic optimization. École d'Été de Probabilités de Saint-Flour XXXI -- 2001
- Stochastic model specification search for Gaussian and partial non-Gaussian state space models
- The Variational Gaussian Approximation Revisited
- The no-U-turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo
- The pseudo-marginal approach for efficient Monte Carlo computations
- Variational Bayesian Inference for Parametric and Nonparametric Regression With Missing Data
- \(\mathrm{SMC}^2\): an efficient algorithm for sequential analysis of state space models
Cited in (24)
- A closed-form filter for binary time series
- On a Metropolis-Hastings importance sampling estimator
- High-dimensional Bayesian inference via the unadjusted Langevin algorithm
- Hyper nonlocal priors for variable selection in generalized linear models
- Bounding Wasserstein Distance with Couplings
- Efficient importance sampling in low dimensions using affine arithmetic
- Adaptive tuning of Hamiltonian Monte Carlo within sequential Monte Carlo
- Global Consensus Monte Carlo
- Non-reversible guided Metropolis kernel
- Expectation propagation for the smoothing distribution in dynamic probit
- Speeding up the zig-zag process
- Bayesian inference in the presence of intractable normalizing functions
- Connecting the Dots: Numerical Randomized Hamiltonian Monte Carlo with State-Dependent Event Rates
- Leave Pima Indians alone: binary regression as a benchmark for Bayesian computation
- scientific article; zbMATH DE number 7625183 (no title available)
- Bayesian Conjugacy in Probit, Tobit, Multinomial Probit and Extensions: A Review and New Results
- On the use of Cauchy prior distributions for Bayesian logistic regression
- Concentration of tempered posteriors and of their variational approximations
- Ultimate Pólya Gamma Samplers–Efficient MCMC for Possibly Imbalanced Binary and Categorical Data
- Tuning diagonal scale matrices for HMC
- A fresh Take on ‘Barker Dynamics’ for MCMC
- Higher-Order Monte Carlo through Cubic Stratification
- GPU-accelerated Gibbs sampling: a case study of the horseshoe probit model
- Scalable Computation of Predictive Probabilities in Probit Models with Gaussian Process Priors
MaRDI item Q1790387