scientific article; zbMATH DE number 7307488
From MaRDI portal
Publication:5149258
Authors: Frederik Heber, Zofia Trstanova, Benedict Leimkuhler
Publication date: 8 February 2021
Full work available at URL: https://jmlr.csail.mit.edu/papers/v21/19-339.html
Title of this publication is not available
Recommendations
- Learning to optimize via posterior sampling
- Parametric Inference for Discretely Sampled Stochastic Differential Equations
- Variational Inference for Stochastic Differential Equations
- Posterior inference on parameters of stochastic differential equations via non-linear Gaussian filtering and adaptive MCMC
- Randomized maximum likelihood based posterior sampling
- Variational Monte Carlo -- bridging concepts of machine learning and high-dimensional partial differential equations
- Laplace based approximate posterior inference for differential equation models
- Discrete sample estimation for Gaussian random fields generated by stochastic partial differential equations
- Machine learning approximation algorithms for high-dimensional fully nonlinear partial differential equations and second-order backward stochastic differential equations
Keywords: Markov chain Monte Carlo; neural network training; Bayesian posterior sampling; ensemble sampling strategies; software platforms for machine learning
Cites Work
- Ensemble preconditioning for Markov chain Monte Carlo simulation
- Adaptive subgradient methods for online learning and stochastic optimization
- Numerical Optimization
- How to generate random matrices from the classical compact groups
- Ensemble samplers with affine invariance
- The concentration of measure phenomenon
- Free energy computations. A mathematical perspective
- The computation of averages from equilibrium and nonequilibrium Langevin molecular dynamics
- Long-run accuracy of variational integrators in the stochastic context
- Are Loss Functions All the Same?
- Rational construction of stochastic numerical methods for molecular sampling
- Machine learning in computer vision.
- High-dimensional Bayesian inference via the unadjusted Langevin algorithm
- Molecular dynamics. With deterministic and stochastic numerical methods
- Computational complexity of Metropolis-Hastings methods in high dimensions
- Optimization methods for large-scale machine learning
- Entropy-SGD: biasing gradient descent into wide valleys
- Flat Minima
- Statistics in the big data era: failures of the machine
- Exploration of the (non-)asymptotic bias and variance of stochastic gradient Langevin dynamics
- Theoretical Insights Into the Optimization Landscape of Over-Parameterized Shallow Neural Networks
- The simulated tempering method in the infinite switch limit with adaptive weight learning
- Local and global perspectives on diffusion maps in the analysis of molecular systems
Cited In (3)
Uses Software