Power-expected-posterior priors for variable selection in Gaussian linear models
From MaRDI portal
Abstract: In the context of the expected-posterior prior (EPP) approach to Bayesian variable selection in linear models, we combine ideas from power-prior and unit-information-prior methodologies to produce a minimally informative prior while simultaneously diminishing the effect of training samples. The result is that in practice our power-expected-posterior (PEP) methodology is sufficiently insensitive to the size n* of the training sample, due to PEP's unit-information construction, that one may take n* equal to the full-data sample size n and dispense with training samples altogether. In this paper we focus on Gaussian linear models and develop our method under two different baseline prior choices: the independence Jeffreys (or reference) prior, yielding the J-PEP posterior, and the Zellner g-prior, leading to Z-PEP. We find that, under the reference baseline prior, the asymptotics of PEP Bayes factors are equivalent to those of Schwarz's BIC criterion, ensuring consistency of the PEP approach to model selection. We compare the performance of our method, in simulation studies and a real example involving prediction of air-pollutant concentrations from meteorological covariates, with that of a variety of previously defined variants of Bayes factors for objective variable selection. Our prior, due to its unit-information structure, leads to a variable-selection procedure that (1) is systematically more parsimonious than the basic EPP with minimal training sample, while sacrificing no desirable performance characteristics to achieve this parsimony; (2) is robust to the size of the training sample, thus enjoying the advantages described above arising from the avoidance of training samples altogether; and (3) identifies maximum-a-posteriori models that achieve good out-of-sample predictive performance.
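The abstract notes that, under the reference baseline prior, PEP Bayes factors are asymptotically equivalent to Schwarz's BIC. A minimal Python sketch of the BIC side of that correspondence, comparing two nested Gaussian linear models, may help fix ideas; the function and variable names below are illustrative, not taken from the paper, and this is the standard BIC approximation rather than the PEP computation itself:

```python
import numpy as np

def bic_gaussian(y, X):
    """BIC for the Gaussian linear model y = X b + e, with MLE error variance."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares = MLE fit
    resid = y - X @ beta
    sigma2 = resid @ resid / n                    # MLE of the error variance
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    k = p + 1                                     # p coefficients + variance
    return -2 * loglik + k * np.log(n)

# Simulated data: x1 is a real predictor, x2 is pure noise.
rng = np.random.default_rng(0)
n = 200
x1, x2 = rng.normal(size=(2, n))
y = 1.0 + 2.0 * x1 + rng.normal(size=n)

X_small = np.column_stack([np.ones(n), x1])       # true model
X_full = np.column_stack([np.ones(n), x1, x2])    # includes the noise covariate

# Under the BIC approximation, 2 log BF(small vs. full) is roughly
# BIC(full) - BIC(small); a positive value favors the smaller model.
approx_2logBF = bic_gaussian(y, X_full) - bic_gaussian(y, X_small)
```

The log(n) penalty per extra parameter is what makes BIC-type criteria (and hence consistent PEP-style Bayes factors) prefer the parsimonious model as n grows.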
Recommendations
- Information consistency of the Jeffreys power-expected-posterior prior in Gaussian linear models
- Power-expected-posterior priors for generalized linear models
- Variations of power-expected-posterior priors in normal regression models
- Limiting behavior of the Jeffreys power-expected-posterior Bayes factor in Gaussian linear models
- Bayesian variable selection using an adaptive powered correlation prior
Cites work
- scientific article; zbMATH DE number 3791441
- scientific article; zbMATH DE number 469327
- scientific article; zbMATH DE number 2091799
- scientific article; zbMATH DE number 845714
- A Reference Bayesian Test for Nested Hypotheses and its Relationship to the Schwarz Criterion
- Adaptive sampling for Bayesian variable selection
- Bayesian and Non-Bayesian Analysis of the Regression Model with Multivariate Student-t Error Terms
- Bayesian model selection in high-dimensional settings
- Comparison of Bayesian objective procedures for variable selection in linear regression
- Compatibility of prior specifications across linear models
- Computation for intrinsic variable selection in normal regression models via expected-posterior prior
- Consistency of Bayesian procedures for variable selection
- Estimating Optimal Transformations for Multiple Regression and Correlation
- Estimating the dimension of a model
- Expected-posterior prior distributions for model selection
- Inference from intrinsic Bayes' procedures under model selection and uncertainty
- Mixtures of g Priors for Bayesian Variable Selection
- Objective Bayesian Variable Selection
- Objective Testing Procedures in Linear Models: Calibration of the p-values
- Optimal predictive model selection
- Statistical challenges of high-dimensional data
- The Intrinsic Bayes Factor for Model Selection and Prediction
- Training samples in objective Bayesian model selection
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
Cited in (18)
- Power-Expected-Posterior Methodology with Baseline Shrinkage Priors
- Objective methods for graphical structural learning
- Power-expected-posterior prior Bayes factor consistency for nested linear models with increasing dimensions
- Prior distributions for objective Bayesian analysis
- Variations of power-expected-posterior priors in normal regression models
- Learning Markov equivalence classes of directed acyclic graphs: an objective Bayes approach
- Power-expected-posterior priors for generalized linear models
- Priors via imaginary training samples of sufficient statistics for objective Bayesian hypothesis testing
- Limiting behavior of the Jeffreys power-expected-posterior Bayes factor in Gaussian linear models
- On the correspondence from Bayesian log-linear modelling to logistic regression modelling with g-priors
- On the safe use of prior densities for Bayesian model selection
- An interview with Luis Raúl Pericchi
- Information consistency of the Jeffreys power-expected-posterior prior in Gaussian linear models
- Objective Bayesian model choice for non-nested families: the case of the Poisson and the negative binomial
- Shrinkage priors via random imaginary data
- A comparison of power-expected-posterior priors in shrinkage regression
- Catalytic prior distributions with application to generalized linear models
- Power-expected-posterior priors as mixtures of g-priors in normal linear models
MaRDI item Q273575