Dirichlet-Laplace priors for optimal shrinkage

From MaRDI portal
Publication:5367461

DOI: 10.1080/01621459.2014.960967
zbMATH Open: 1373.62368
arXiv: 1401.5398
OpenAlex: W2150149003
Wikidata: Q36715879 (Scholia: Q36715879)
MaRDI QID: Q5367461
FDO: Q5367461


Authors: Anirban Bhattacharya, Debdeep Pati, Natesh S. Pillai, David Dunson


Publication date: 13 October 2017

Published in: Journal of the American Statistical Association

Abstract: Penalized regression methods, such as L1 regularization, are routinely used in high-dimensional applications, and there is a rich literature on optimality properties under sparsity assumptions. In the Bayesian paradigm, sparsity is routinely induced through two-component mixture priors having a probability mass at zero, but such priors encounter daunting computational problems in high dimensions. This has motivated an amazing variety of continuous shrinkage priors, which can be expressed as global-local scale mixtures of Gaussians, facilitating computation. In sharp contrast to the frequentist literature, little is known about the properties of such priors and the convergence and concentration of the corresponding posterior distribution. In this article, we propose a new class of Dirichlet-Laplace (DL) priors, which possess optimal posterior concentration and lead to efficient posterior computation exploiting results from normalized random measure theory. Finite sample performance of Dirichlet-Laplace priors relative to alternatives is assessed in simulated and real data examples.
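The abstract describes the DL prior as a global-local scale mixture. A minimal sketch of drawing from the DL prior, assuming the hierarchical form given in the paper (θ_j | φ, τ ~ DE(φ_j τ), φ ~ Dirichlet(a, …, a), τ ~ Gamma(na, 1/2)); the function name and defaults here are illustrative, not from the source:

```python
import numpy as np

def sample_dl_prior(n, a=0.5, size=1000, seed=0):
    """Draw `size` coefficient vectors theta in R^n from the
    Dirichlet-Laplace prior (hedged sketch of the DL hierarchy):
      theta_j | phi, tau ~ DE(phi_j * tau)   (double exponential, scale phi_j*tau)
      phi ~ Dirichlet(a, ..., a)             (local scales, sum to 1)
      tau ~ Gamma(n*a, rate 1/2)             (global scale)
    """
    rng = np.random.default_rng(seed)
    phi = rng.dirichlet(np.full(n, a), size=size)      # shape (size, n)
    tau = rng.gamma(n * a, scale=2.0, size=(size, 1))  # rate 1/2 -> scale 2
    theta = rng.laplace(loc=0.0, scale=phi * tau)      # elementwise Laplace scales
    return theta, phi, tau
```

For small a, most entries of phi are near zero, so most coordinates of theta are shrunk strongly toward zero while a few remain large, which is the sparsity-inducing behavior the abstract refers to.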


Full work available at URL: https://arxiv.org/abs/1401.5398





This page was built for publication: Dirichlet-Laplace priors for optimal shrinkage
