A horseshoe mixture model for Bayesian screening with an application to light sheet fluorescence microscopy in brain imaging
Publication: 6138589
DOI: 10.1214/23-AOAS1736
arXiv: 2106.08281
OpenAlex: W4386515913
MaRDI QID: Q6138589
Authors: Ricardo B. R. Azevedo, Chelsie Lo, Damian G. Wheeler, Sunil P. Gandhi, Michele Guindani, Babak Shahbaba, Francesco D'Angelo
Publication date: 16 January 2024
Published in: The Annals of Applied Statistics
Abstract: In this paper, we focus on identifying differentially activated brain regions using light sheet fluorescence microscopy, a recently developed technique for whole-brain imaging. Most existing statistical methods address this problem by partitioning the brain regions into two classes: significantly and non-significantly activated. However, for the brain imaging problem at the center of our study, such binary grouping may yield overly simplistic discoveries by filtering out weak but important signals that are typically obscured by the noise present in the data. To overcome this limitation, we introduce a new Bayesian approach that classifies the brain regions into several tiers with varying degrees of relevance. Our approach combines shrinkage priors, widely used in regression and multiple hypothesis testing, with mixture models, commonly used in model-based clustering. In contrast to existing regularizing prior distributions, which use either the spike-and-slab prior or continuous scale mixtures, our class of priors is based on a discrete mixture of continuous scale mixtures and yields a cluster-shrinkage version of the horseshoe prior. As a result, our approach provides a more general setting for Bayesian sparse estimation, drastically reduces the number of shrinkage parameters needed, and creates a framework for sharing information across units of interest. We show that this approach leads to more biologically meaningful and interpretable results in our brain imaging problem: it discriminates between active and inactive regions while also ranking the discoveries into clusters representing tiers of similar importance.
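The abstract describes a cluster-shrinkage variant of the horseshoe prior, built as a discrete mixture of continuous scale mixtures. A minimal sketch of that idea, assuming a simple reading in which each unit is assigned to one of K clusters sharing a common global scale (the function names, cluster scales, and weights below are illustrative, not the authors' exact specification):

```python
import numpy as np

rng = np.random.default_rng(0)

def horseshoe_samples(n, tau=1.0, rng=rng):
    """Standard horseshoe prior draws:
    theta_j ~ N(0, lambda_j^2 * tau^2), lambda_j ~ HalfCauchy(0, 1)."""
    lam = np.abs(rng.standard_cauchy(n))  # half-Cauchy local scales
    return rng.normal(0.0, lam * tau)

def cluster_horseshoe_samples(n, taus, weights, rng=rng):
    """Sketch of a cluster-shrinkage horseshoe: each unit is first
    assigned to cluster k with probability weights[k]; units in cluster
    k share the global scale taus[k], so clusters act as tiers of
    shrinkage. Illustrative only -- not the paper's full model."""
    taus = np.asarray(taus)
    z = rng.choice(len(taus), size=n, p=weights)  # cluster labels
    lam = np.abs(rng.standard_cauchy(n))          # local half-Cauchy scales
    return rng.normal(0.0, lam * taus[z]), z

# Three hypothetical tiers: strongly shrunk, moderate, nearly unshrunk.
theta, z = cluster_horseshoe_samples(
    1000, taus=[0.01, 0.5, 2.0], weights=[0.7, 0.2, 0.1])
```

Under this reading, only K global scales (plus one local scale per unit) are needed, rather than a separate shrinkage profile per unit, which matches the abstract's claim of drastically reducing the number of shrinkage parameters while sharing information across units.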
Full work available at URL: https://arxiv.org/abs/2106.08281
Cites Work
- Asymptotic behaviour of the posterior distribution in overfitted mixture models
- Inference with normal-gamma prior distributions in regression problems
- The horseshoe estimator for sparse signals
- The Bayesian Lasso
- Title not available
- Model-based clustering based on sparse finite Gaussian mixtures
- Bayesian Variable Selection in Linear Regression
- Title not available
- Dirichlet-Laplace priors for optimal shrinkage
- Robust graphical modeling of gene networks using classical and alternative \(t\)-distributions
- Sparsity information and regularization in the horseshoe and other shrinkage priors
- Large-Scale Simultaneous Hypothesis Testing
- Spike and slab variable selection: frequentist and Bayesian strategies
- Size, power and false discovery rates
- The spike-and-slab LASSO
- Title not available
- Scalable approximate MCMC algorithms for the horseshoe prior
- Robust Bayesian graphical modeling using Dirichlet \(t\)-distributions
- Mean field variational Bayes for continuous sparse signal shrinkage: pitfalls and remedies
- The Bayesian Bridge
- Moving to a World Beyond “p < 0.05”
- Abandon Statistical Significance
- Uncertainty quantification for the horseshoe (with discussion)
- Decoupling shrinkage and selection in Bayesian linear models: a posterior summary perspective
- The horseshoe-like regularization for feature subset selection
- Lasso meets horseshoe: a survey