Sticky PDMP samplers for sparse and local inference problems

From MaRDI portal
Publication:2104011

DOI: 10.1007/S11222-022-10180-5
zbMATH Open: 1499.62011
arXiv: 2103.08478
OpenAlex: W4310153751
MaRDI QID: Q2104011
FDO: Q2104011


Authors: Joris Bierkens, Sebastiano Grazzi, Moritz Schauer, Frank van der Meulen


Publication date: 9 December 2022

Published in: Statistics and Computing

Abstract: We construct a new class of efficient Monte Carlo methods based on continuous-time piecewise deterministic Markov processes (PDMPs) suitable for inference in high-dimensional sparse models, i.e. models for which there is prior knowledge that many coordinates are likely to be exactly 0. This is achieved with the fairly simple idea of endowing existing PDMP samplers with 'sticky' coordinate axes, coordinate planes, etc. Upon hitting one of those subspaces, an event is triggered during which the process sticks to the subspace, thereby spending some time in a sub-model. This results in non-reversible jumps between different (sub-)models. While we show that PDMP samplers in general can be made sticky, we mainly focus on the Zig-Zag sampler. Compared to the Gibbs sampler for variable selection, we heuristically derive favourable dependence of the Sticky Zig-Zag sampler on dimension and data size. The computational efficiency of the Sticky Zig-Zag sampler is further established through numerical experiments in which both the sample size and the dimension of the parameter space are large.


Full work available at URL: https://arxiv.org/abs/2103.08478
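The 'sticky' mechanism described in the abstract can be illustrated with a minimal one-dimensional sketch (this is an illustrative toy, not the authors' implementation): a Zig-Zag process with unit speed targeting a standard Gaussian 'slab', where hitting the origin freezes the coordinate for an Exp(kappa) holding time before the trajectory resumes with the same velocity. For the standard Gaussian the reflection rate is max(0, v*x + s), so event times can be sampled by exact inversion.

```python
import math
import random

def sticky_zigzag_gaussian(kappa, x0=1.5, v0=1.0, n_events=50000, seed=7):
    """Toy 1D sticky Zig-Zag sampler for a standard Gaussian slab
    plus a point mass at 0 (sketch; kappa controls the sticking time).

    Reflection rate along the trajectory x + v*s (with |v| = 1):
        lambda(s) = max(0, v*(x + v*s)) = max(0, a + s),  a = v*x.
    Inverting the integrated rate against E ~ Exp(1) gives the exact
    reflection time  T = -a + sqrt(max(a, 0)**2 + 2*E).
    """
    rng = random.Random(seed)
    x, v = x0, v0
    time_at_zero = 0.0
    total_time = 0.0
    for _ in range(n_events):
        a = v * x
        e = rng.expovariate(1.0)
        t_reflect = -a + math.sqrt(max(a, 0.0) ** 2 + 2.0 * e)
        # Time until the trajectory hits 0 (finite only if moving towards it).
        t_hit = -x / v if v * x < 0 else math.inf
        if t_hit < t_reflect:
            # Sticky event: freeze at 0 for an Exp(kappa) holding time,
            # then resume with the same velocity (crossing to the other side).
            hold = rng.expovariate(kappa)
            total_time += t_hit + hold
            time_at_zero += hold
            x = 0.0
        else:
            # Ordinary Zig-Zag reflection: flip the velocity.
            total_time += t_reflect
            x += v * t_reflect
            v = -v
    return time_at_zero, total_time

tz, tt = sticky_zigzag_gaussian(kappa=1.0)
frac = tz / tt  # fraction of time spent exactly at 0
```

With kappa = 1 and unit speed, the fraction of time stuck at zero should be roughly pi(0)/(kappa + pi(0)) with pi(0) = 1/sqrt(2*pi), i.e. about 0.28; increasing kappa shrinks the point mass at zero, mirroring a weaker sparsity prior.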






