Sub-optimality of some continuous shrinkage priors
Publication: Q335657
DOI: 10.1016/j.spa.2016.08.007 · zbMATH Open: 1419.62050 · arXiv: 1605.05671 · OpenAlex: W2963623196 · MaRDI QID: Q335657 · FDO: Q335657
David Dunson, Natesh S. Pillai, Anirban Bhattacharya, Debdeep Pati
Publication date: 2 November 2016
Published in: Stochastic Processes and their Applications
Abstract: Two-component mixture priors provide a traditional way to induce sparsity in high-dimensional Bayes models. However, several aspects of such priors, including computational complexity in high dimensions, the interpretation of exact zeros, and non-sparse posterior summaries under standard loss functions, have motivated a wide variety of continuous shrinkage priors, which can be expressed as global-local scale mixtures of Gaussians. Interestingly, we demonstrate that many commonly used shrinkage priors, including the Bayesian Lasso, do not have adequate posterior concentration in high-dimensional settings.
Full work available at URL: https://arxiv.org/abs/1605.05671
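As a minimal illustrative sketch (not part of the original record), the global-local scale mixture of Gaussians referred to in the abstract places, on coefficients \(\theta_1, \dots, \theta_n\), a prior of the form
\[
\theta_j \mid \psi_j, \tau \sim \mathrm{N}(0, \psi_j \tau), \qquad \psi_j \stackrel{\mathrm{iid}}{\sim} f, \qquad \tau \sim g, \qquad j = 1, \dots, n,
\]
where the local scales \(\psi_j\) permit coordinate-wise deviations from shrinkage and the global scale \(\tau\) controls overall sparsity. The Bayesian Lasso is the exponential-mixing instance: with \(\tau = 1\) fixed and \(\psi_j \sim \mathrm{Exp}(\lambda^2/2)\), each \(\theta_j\) is marginally Laplace with rate \(\lambda\), which is the case the paper shows to have inadequate posterior concentration. Parametrizations vary across the works cited below; this display is only a common convention.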
Recommendations
- Dirichlet-Laplace priors for optimal shrinkage
- Ultra high-dimensional multivariate posterior contraction rate under shrinkage priors
- Bayesian shrinkage towards sharp minimaxity
- Posterior consistency in linear models under shrinkage priors
- Conditions for posterior contraction in the sparse normal means problem
Keywords: Lasso; Bayesian; high dimensional; regularization; penalized regression; convergence rate; lower bound; \(\ell_1\); shrinkage prior; sub-optimal
Cites Work
- Elastic Net Regression Modeling With the Orthant Normal Prior
- Least angle regression (with discussion)
- Statistics for high-dimensional data. Methods, theory and applications.
- Lasso-type recovery of sparse representations for high-dimensional data
- High-dimensional generalized linear models and the lasso
- Inference with normal-gamma prior distributions in regression problems
- Title not available
- The horseshoe estimator for sparse signals
- The Bayesian Lasso
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- Needles and straw in haystacks: Empirical Bayes estimates of possibly sparse sequences
- Confidence sets in sparse regression
- On the half-Cauchy prior for a global scale parameter
- DOI: 10.1162/15324430152748236
- Dirichlet–Laplace Priors for Optimal Shrinkage
- Prior distributions for variance parameters in hierarchical models (Comment on article by Browne and Draper)
- Bayes and empirical-Bayes multiplicity adjustment in the variable-selection problem
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Minimax Rates of Estimation for High-Dimensional Linear Regression Over $\ell_q$-Balls
- Posterior contraction in sparse Bayesian factor models for massive covariance matrices
- Title not available
- The horseshoe estimator: posterior concentration around nearly black vectors
- Lower bounds for posterior rates with Gaussian process priors
- Title not available
- Needles and straw in a haystack: posterior concentration for possibly sparse sequences
- Generalized double Pareto shrinkage
- Bayesian linear regression with sparse priors
- On scale mixtures of normal distributions
- Three Multidimensional-integral Identities with Bayesian Applications
Cited In (6)
- Needles and straw in a haystack: robust confidence for possibly sparse sequences
- Compound Poisson processes, latent shrinkage priors and Bayesian nonconvex penalization
- A Mass-Shifting Phenomenon of Truncated Multivariate Normal Priors
- Functional Horseshoe Priors for Subspace Shrinkage
- Geometric ergodicity of the Bayesian Lasso
- Rate-optimal posterior contraction for sparse PCA