Learning variational autoencoders via MCMC speed measures
From MaRDI portal
Publication: 6606966
DOI: 10.1007/S11222-024-10481-X
zbMATH Open: 1545.6206
MaRDI QID: Q6606966
FDO: Q6606966
Authors: Marcel Hirt, Vasileios Kreouzis, Petros Dellaportas
Publication date: 17 September 2024
Published in: Statistics and Computing
Recommendations
- Deep variational inference
- Improving latent variable descriptiveness by modelling rather than ad-hoc factors
- Variational Hamiltonian Monte Carlo via score matching
- \(\pi\) VAE: a stochastic process prior for Bayesian deep learning with MCMC
- Asymptotically exact inference in differentiable generative models
MSC classifications:
- Computational methods in Markov chains (60J22)
- Computational methods for problems pertaining to statistics (62-08)
Cites Work
- The no-U-turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo
- Weak convergence and optimal scaling of random walk Metropolis algorithms
- A general framework for the parametrization of hierarchical models
- Particle Markov Chain Monte Carlo Methods
- Primal-dual subgradient methods for convex problems
- Probabilistic Principal Component Analysis
- A Connection Between Score Matching and Denoising Autoencoders
- Title not available
- Geometric numerical integration illustrated by the Störmer–Verlet method
- On the geometric ergodicity of Hamiltonian Monte Carlo
- Log-concave sampling: Metropolis-Hastings algorithms are fast
- Geometric integrators and the Hamiltonian Monte Carlo method
- Title not available
- Stochastic normalizing flows for inverse problems: a Markov chains viewpoint
Cited In (1)