Learning variational autoencoders via MCMC speed measures
From MaRDI portal
Publication:6606966
Recommendations
- Deep variational inference
- Improving latent variable descriptiveness by modelling rather than ad-hoc factors
- Variational Hamiltonian Monte Carlo via score matching
- \(\pi\) VAE: a stochastic process prior for Bayesian deep learning with MCMC
- Asymptotically exact inference in differentiable generative models
Cites work
- scientific article; zbMATH DE number 6982332
- scientific article; zbMATH DE number 2152346
- A Connection Between Score Matching and Denoising Autoencoders
- A general framework for the parametrization of hierarchical models
- Geometric integrators and the Hamiltonian Monte Carlo method
- Geometric numerical integration illustrated by the Störmer–Verlet method
- Log-concave sampling: Metropolis-Hastings algorithms are fast
- On the geometric ergodicity of Hamiltonian Monte Carlo
- Particle Markov Chain Monte Carlo Methods
- Primal-dual subgradient methods for convex problems
- Probabilistic Principal Component Analysis
- Stochastic normalizing flows for inverse problems: a Markov chains viewpoint
- The no-U-turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo
- Weak convergence and optimal scaling of random walk Metropolis algorithms