Convergence properties of Gibbs samplers for Bayesian probit regression with proper priors
From MaRDI portal
Abstract: The Bayesian probit regression model (Albert and Chib (1993)) is popular and widely used for binary regression. While the improper flat prior for the regression coefficients is an appropriate choice in the absence of prior information, a proper normal prior is desirable when prior information is available or in modern high-dimensional settings where the number of coefficients (p) exceeds the sample size (n). For both choices of prior, the resulting posterior density is intractable, and a Data Augmentation (DA) Markov chain is used to generate approximate samples from the posterior distribution. Establishing geometric ergodicity of this DA Markov chain is important, as it provides theoretical guarantees for constructing standard errors for Markov chain based estimates of posterior quantities. In this paper, we first show that in the case of proper normal priors, the DA Markov chain is geometrically ergodic *for all* choices of the design matrix X, n, and p (unlike the improper prior case, where n ≥ p and another condition on X are required for posterior propriety itself). We also derive sufficient conditions under which the DA Markov chain is trace-class, i.e., the eigenvalues of the corresponding operator are summable. In particular, this allows us to conclude that the Haar PX-DA sandwich algorithm (obtained by inserting an inexpensive extra step between the two steps of the DA algorithm) is strictly better than the DA algorithm in an appropriate sense.
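The two-step DA chain discussed in the abstract can be sketched as follows. This is an illustrative implementation, not the authors' code: it assumes a proper N(0, (1/prior_prec) I) prior on the coefficients (the function name and the `prior_prec` parameter are ours), and alternates the Albert–Chib steps of drawing truncated-normal latent variables z given beta, then beta given z from a multivariate normal.

```python
import numpy as np
from scipy.stats import truncnorm

def da_probit_sampler(X, y, n_iter=1000, prior_prec=1.0, rng=None):
    """Sketch of the Albert-Chib DA Gibbs sampler for probit regression
    with a proper N(0, prior_prec^{-1} I) prior on beta."""
    rng = np.random.default_rng(rng)
    n, p = X.shape
    # Posterior covariance of beta | z is fixed across iterations:
    # B = (X'X + prior_prec * I)^{-1}
    B = np.linalg.inv(X.T @ X + prior_prec * np.eye(p))
    L = np.linalg.cholesky(B)
    beta = np.zeros(p)
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        # Step 1: z_i ~ N(x_i' beta, 1) truncated to (0, inf) if y_i = 1,
        # to (-inf, 0) if y_i = 0 (bounds standardized around the mean m).
        m = X @ beta
        lo = np.where(y == 1, -m, -np.inf)
        hi = np.where(y == 1, np.inf, -m)
        z = m + truncnorm.rvs(lo, hi, random_state=rng)
        # Step 2: beta | z ~ N(B X'z, B)
        beta = B @ (X.T @ z) + L @ rng.standard_normal(p)
        draws[t] = beta
    return draws
```

The Haar PX-DA sandwich algorithm mentioned in the abstract would insert one more inexpensive rescaling move on z between Step 1 and Step 2; its exact form depends on the prior and is omitted here.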
Recommendations
- Convergence analysis of the block Gibbs sampler for Bayesian probit linear mixed models with improper priors
- Geometric ergodicity of Pólya-Gamma Gibbs sampler for Bayesian logistic regression with a flat prior
- A Reflection Principle As a Reverse-mathematical Fixed Point over the Base Theory ZFC
- Convergence complexity analysis of Albert and Chib's algorithm for Bayesian probit regression
- Convergence analysis of MCMC algorithms for Bayesian multivariate linear regression with non-Gaussian errors
Cited in (14)
- Estimating accuracy of the MCMC variance estimator: asymptotic normality for batch means estimators
- Geometric ergodicity of Pólya-Gamma Gibbs sampler for Bayesian logistic regression with a flat prior
- Efficient shape-constrained inference for the autocovariance sequence from a reversible Markov chain
- Block Gibbs samplers for logistic mixed models: convergence properties and a comparison with full Gibbs samplers
- Uncertainty Quantification for Modern High-Dimensional Regression via Scalable Bayesian Methods
- A new go-to sampler for Bayesian probit regression
- Convergence complexity analysis of Albert and Chib's algorithm for Bayesian probit regression
- Convergence analysis of the block Gibbs sampler for Bayesian probit linear mixed models with improper priors
- Convergence properties of data augmentation algorithms for high-dimensional robit regression
- Wasserstein-based methods for convergence complexity analysis of MCMC with applications
- Analysis of the Pólya-gamma block Gibbs sampler for Bayesian logistic linear mixed models
- The Modified-Half-Normal distribution: Properties and an efficient sampling scheme
- Estimating the spectral gap of a trace-class Markov operator
- Consistent estimation of the spectrum of trace class data augmentation algorithms