Conditions for posterior contraction in the sparse normal means problem
From MaRDI portal
Abstract: The first Bayesian results for the sparse normal means problem were proven for spike-and-slab priors. These priors, however, are less convenient from a computational point of view. In the meantime, a large number of continuous shrinkage priors have been proposed. Many of these shrinkage priors can be written as a scale mixture of normals, which makes them particularly easy to implement. We propose general conditions on the prior on the local variance in scale mixtures of normals under which posterior contraction at the minimax rate is assured. The conditions require tails at least as heavy as Laplace, but not too heavy, and a large amount of mass around zero relative to the tails, increasingly so as the sparsity increases. These conditions provide general guidelines for choosing a shrinkage prior for estimation under a nearly black sparsity assumption. We verify these conditions for the class of priors considered by Ghosh and Chakrabarti (2015), which includes the horseshoe and the normal-exponential-gamma priors, as well as for the horseshoe+, the inverse-Gaussian prior, the normal-gamma prior, and the spike-and-slab Lasso, thus extending the set of shrinkage priors known to lead to posterior contraction at the minimax estimation rate.
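The scale-mixture-of-normals representation mentioned in the abstract can be illustrated with a minimal sketch. Assuming the standard horseshoe construction (one of the priors covered by the paper's conditions), each coordinate is drawn as a normal whose local scale is itself half-Cauchy; the function name and parameter choices below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def horseshoe_draws(n, tau=1.0):
    """Sample n draws from the horseshoe prior via its scale-mixture form:
    theta_i | lambda_i ~ N(0, lambda_i^2 * tau^2), lambda_i ~ half-Cauchy(0, 1).
    tau is the global shrinkage parameter (set to 1 here for illustration)."""
    lam = np.abs(rng.standard_cauchy(n))   # half-Cauchy local scales
    return rng.normal(0.0, lam * tau)      # normal given the local scale

draws = horseshoe_draws(100_000)
# The mixture yields the behavior the conditions formalize: a large amount
# of mass near zero (strong shrinkage of noise) combined with Cauchy-like
# heavy tails (large signals escape shrinkage).
print("fraction with |theta| < 0.1:", np.mean(np.abs(draws) < 0.1))
print("largest |theta|:", np.max(np.abs(draws)))
```

Writing the prior this way is what makes Gibbs-type samplers straightforward: conditional on the local scales, the model is conjugate normal.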
Cites work
- scientific article; zbMATH DE number 3124366
- scientific article; zbMATH DE number 409717
- scientific article; zbMATH DE number 3442988
- scientific article; zbMATH DE number 845714
- Asymptotic properties of Bayes risk for the horseshoe prior
- Asymptotically minimax empirical Bayes estimation of a sparse normal mean vector
- Bayesian estimation of sparse signals with a continuous spike-and-slab prior
- Bayesian linear regression with sparse priors
- Convergence rates of posterior distributions
- Dirichlet-Laplace priors for optimal shrinkage
- Gibbs Sampling for Bayesian Non-Conjugate and Hierarchical Models by Using Auxiliary Variables
- Good, great, or lucky? Screening for firms with sustained superior performance using heavy-tailed priors
- Inference with normal-gamma prior distributions in regression problems
- Needles and straw in a haystack: posterior concentration for possibly sparse sequences
- Needles and straw in haystacks: Empirical Bayes estimates of possibly sparse sequences
- On adaptive posterior concentration rates
- On the half-Cauchy prior for a global scale parameter
- The Bayesian Lasso
- The horseshoe estimator for sparse signals
- The horseshoe estimator: posterior concentration around nearly black vectors
- The horseshoe+ estimator of ultra-sparse signals
Cited in (32)
- On the exponentially weighted aggregate with the Laplace prior
- The horseshoe-like regularization for feature subset selection
- Needles and straw in a haystack: posterior concentration for possibly sparse sequences
- Spike and slab empirical Bayes sparse credible sets
- Adaptive posterior contraction rates for the horseshoe
- A comparative study on high-dimensional Bayesian regression with binary predictors
- Neuronized Priors for Bayesian Sparse Linear Regression
- A new Bayesian Lasso and ridge regression with a practically meaningful parameterization and a simple weakly informative prior
- Lasso meets horseshoe: a survey
- The horseshoe estimator: posterior concentration around nearly black vectors
- A global-local approach for detecting hotspots in multiple-response regression
- Bayesian effect selection in structured additive distributional regression models
- Horseshoe Regularisation for Machine Learning in Complex and Deep Models
- Two-way sparsity for time-varying networks with applications in genomics
- Bayesian estimation of sparse signals with a continuous spike-and-slab prior
- On the beta prime prior for scale parameters in high-dimensional Bayesian regression models
- Confidence interval for normal means in meta-analysis based on a pretest estimator
- Prediction risk for the horseshoe regression
- Sub-optimality of some continuous shrinkage priors
- Contraction properties of shrinkage priors in logistic regression
- Empirical Bayes analysis of spike and slab posterior distributions
- Bayesian high-dimensional semi-parametric inference beyond sub-Gaussian errors
- Bayesian shrinkage towards sharp minimaxity
- Effect of global shrinkage parameter of horseshoe prior in compressed sensing
- Large-scale multiple hypothesis testing with the normal-beta prime prior
- High-dimensional multivariate posterior consistency under global-local shrinkage priors
- Shrinkage with shrunken shoulders: Gibbs sampling shrinkage model posteriors with guaranteed convergence rates
- Global-local shrinkage priors for asymptotic point and interval estimation of normal means under sparsity
- Variance prior forms for high-dimensional Bayesian variable selection
- A Scalable Empirical Bayes Approach to Variable Selection in Generalized Linear Models
- Bayesian variance estimation in the Gaussian sequence model with partial information on the means
- Discussion to: Bayesian graphical models for modern biological applications by Y. Ni, V. Baladandayuthapani, M. Vannucci and F.C. Stingo