Sub-optimality of some continuous shrinkage priors
From MaRDI portal
Publication:335657
Abstract: Two-component mixture priors provide a traditional way to induce sparsity in high-dimensional Bayesian models. However, several aspects of such priors, including computational complexity in high dimensions, the interpretation of exact zeros, and non-sparse posterior summaries under standard loss functions, have motivated a rich variety of continuous shrinkage priors, which can be expressed as global-local scale mixtures of Gaussians. Interestingly, we demonstrate that many commonly used shrinkage priors, including the Bayesian Lasso, do not have adequate posterior concentration in high-dimensional settings.
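As a concrete illustration of the global-local scale-mixture representation mentioned in the abstract, the sketch below uses the standard fact (Park and Casella, 2008) that the Bayesian Lasso's Laplace prior arises as a Gaussian scale mixture with exponential mixing on the local variances. The sample size and rate parameter are arbitrary choices for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 100_000, 2.0

# Global-local scale mixture of Gaussians: theta_j | psi_j ~ N(0, psi_j),
# with local variances psi_j ~ Exp(rate = lam^2 / 2).  Marginally each
# theta_j is Laplace with scale 1/lam -- the Bayesian Lasso prior.
psi = rng.exponential(scale=2.0 / lam**2, size=n)  # Exp(rate) has scale 1/rate
theta = rng.normal(0.0, np.sqrt(psi))

# Direct Laplace draws for comparison.
direct = rng.laplace(0.0, 1.0 / lam, size=n)

# Both samples should have variance close to 2 / lam^2 = 0.5.
print(theta.var(), direct.var())
```

The same hierarchical template, with a heavier-tailed mixing distribution on the local scales (plus a global scale), yields the horseshoe and other priors discussed in the citation lists below.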
Recommendations
- Dirichlet-Laplace priors for optimal shrinkage
- Ultra high-dimensional multivariate posterior contraction rate under shrinkage priors
- Bayesian shrinkage towards sharp minimaxity
- Posterior consistency in linear models under shrinkage priors
- Conditions for posterior contraction in the sparse normal means problem
Cites Work
- scientific article (zbMATH DE number 5957408, no title available)
- scientific article (zbMATH DE number 3824228, no title available)
- scientific article (zbMATH DE number 409717, no title available)
- doi:10.1162/15324430152748236
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- Bayes and empirical-Bayes multiplicity adjustment in the variable-selection problem
- Bayesian linear regression with sparse priors
- Confidence sets in sparse regression
- Dirichlet-Laplace priors for optimal shrinkage
- Elastic net regression modeling with the orthant normal prior
- Generalized double Pareto shrinkage
- High-dimensional generalized linear models and the lasso
- Inference with normal-gamma prior distributions in regression problems
- Lasso-type recovery of sparse representations for high-dimensional data
- Least angle regression. (With discussion)
- Lower bounds for posterior rates with Gaussian process priors
- Minimax Rates of Estimation for High-Dimensional Linear Regression Over $\ell_q$-Balls
- Needles and straw in a haystack: posterior concentration for possibly sparse sequences
- Needles and straw in haystacks: Empirical Bayes estimates of possibly sparse sequences
- On scale mixtures of normal distributions
- On the half-Cauchy prior for a global scale parameter
- Posterior contraction in sparse Bayesian factor models for massive covariance matrices
- Prior distributions for variance parameters in hierarchical models (Comment on article by Browne and Draper)
- Statistics for high-dimensional data. Methods, theory and applications.
- The Bayesian Lasso
- The horseshoe estimator for sparse signals
- The horseshoe estimator: posterior concentration around nearly black vectors
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Three Multidimensional-integral Identities with Bayesian Applications
Cited In (11)
- Needles and straw in a haystack: robust confidence for possibly sparse sequences
- Dirichlet-Laplace priors for optimal shrinkage
- Compound Poisson processes, latent shrinkage priors and Bayesian nonconvex penalization
- A Mass-Shifting Phenomenon of Truncated Multivariate Normal Priors
- Default Bayesian analysis with global-local shrinkage priors
- Bayesian shrinkage towards sharp minimaxity
- Bayesian fusion estimation via \(t\) shrinkage
- Ultra high-dimensional multivariate posterior contraction rate under shrinkage priors
- Functional Horseshoe Priors for Subspace Shrinkage
- Geometric ergodicity of the Bayesian Lasso
- Rate-optimal posterior contraction for sparse PCA