High-dimensional Bayesian inference via the unadjusted Langevin algorithm

From MaRDI portal

DOI: 10.3150/18-BEJ1073
zbMath: 1428.62111
arXiv: 1605.01559
OpenAlex: W2562674776
MaRDI QID: Q2325343

Eric Moulines, Alain Durmus

Publication date: 25 September 2019

Published in: Bernoulli

Full work available at URL: https://arxiv.org/abs/1605.01559
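For context, the unadjusted Langevin algorithm (ULA) studied in this paper is the Euler–Maruyama discretization of the overdamped Langevin diffusion targeting a density proportional to exp(−U): θ_{k+1} = θ_k − γ∇U(θ_k) + √(2γ) ξ_{k+1}, with i.i.d. standard Gaussian ξ_{k+1} and no Metropolis correction. A minimal sketch (the standard-Gaussian target, step size, and iteration counts below are illustrative choices, not from the paper):

```python
import numpy as np

def ula(grad_U, theta0, step, n_iter, rng):
    """Unadjusted Langevin algorithm: Euler-Maruyama discretization of
    the Langevin diffusion dθ_t = -∇U(θ_t) dt + √2 dB_t."""
    theta = np.asarray(theta0, dtype=float)
    samples = np.empty((n_iter, theta.size))
    for k in range(n_iter):
        noise = rng.standard_normal(theta.size)
        theta = theta - step * grad_U(theta) + np.sqrt(2.0 * step) * noise
        samples[k] = theta
    return samples

# Illustrative target: standard Gaussian, U(θ) = ||θ||²/2, so ∇U(θ) = θ.
rng = np.random.default_rng(0)
samples = ula(lambda th: th, np.zeros(2), step=0.1, n_iter=50_000, rng=rng)
print(samples[10_000:].mean(axis=0))  # near 0 after burn-in
```

Because the discretization is not Metropolis-adjusted, the chain's stationary law is a biased approximation of the target (here the stationary variance is 2/(2 − γ) ≈ 1.05 rather than 1); quantifying this bias in Wasserstein and total-variation distance, with explicit dependence on the dimension, is the subject of the paper.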



Related Items

Statistical Finite Elements via Langevin Dynamics
Approximation to stochastic variance reduced gradient Langevin dynamics by stochastic delay differential equations
Central limit theorem and self-normalized Cramér-type moderate deviation for Euler-Maruyama scheme
Improved bounds for discretization of Langevin diffusions: near-optimal rates without convexity
Stochastic zeroth-order discretizations of Langevin diffusions for Bayesian inference
Stochastic gradient Hamiltonian Monte Carlo for non-convex learning
Quantitative contraction rates for Markov chains on general state spaces
Complexity of zigzag sampling algorithm for strongly log-concave distributions
Error estimates of the backward Euler-Maruyama method for multi-valued stochastic differential equations
Nonasymptotic bounds for sampling algorithms without log-concavity
On sampling from a log-concave density using kinetic Langevin diffusions
On Irreversible Metropolis Sampling Related to Langevin Dynamics
ALMOND: Adaptive Latent Modeling and Optimization via Neural Networks and Langevin Diffusion
Functional inequalities for perturbed measures with applications to log-concave measures and to some Bayesian problems
Global Optimization via Schrödinger–Föllmer Diffusion
Dimension Free Nonasymptotic Bounds on the Accuracy of High-Dimensional Laplace Approximation
Convergence of Langevin-simulated annealing algorithms with multiplicative noise. II: Total variation
Optimising portfolio diversification and dimensionality
Gradient-based adaptive importance samplers
A reduced-rank approach to predicting multiple binary responses through machine learning
Smoothing unadjusted Langevin algorithms for nonsmooth composite potential functions
Convergence of Position-Dependent MALA with Application to Conditional Simulation in GLMMs
Unadjusted Langevin algorithm with multiplicative noise: total variation and Wasserstein bounds
On the Generalized Langevin Equation for Simulated Annealing
Gradient-Based Markov Chain Monte Carlo for Bayesian Inference With Non-differentiable Priors
Nonasymptotic estimates for stochastic gradient Langevin dynamics under local conditions in nonconvex optimization
Bayesian Inverse Problems Are Usually Well-Posed
Laplace priors and spatial inhomogeneity in Bayesian inverse problems
(Non)-penalized multilevel methods for non-uniformly log-concave distributions
Decentralized Bayesian learning with Metropolis-adjusted Hamiltonian Monte Carlo
Distributed event-triggered unadjusted Langevin algorithm for Bayesian learning
The Langevin Monte Carlo algorithm in the non-smooth log-concave case
The Split Gibbs Sampler Revisited: Improvements to Its Algorithmic Structure and Augmented Target Distribution
Self-Supervised Deep Learning for Image Reconstruction: A Langevin Monte Carlo Approach
Variance reduction for Markov chains with application to MCMC
Taming Neural Networks with TUSLA: Nonconvex Learning via Adaptive Stochastic Gradient Langevin Algorithms
The forward-backward envelope for sampling with the overdamped Langevin algorithm
Bayesian Stochastic Gradient Descent for Stochastic Optimization with Streaming Input Data
Exact convergence analysis for Metropolis–Hastings independence samplers in Wasserstein distances
Ensemble preconditioning for Markov chain Monte Carlo simulation
On stochastic gradient Langevin dynamics with dependent data streams in the logconcave case
Maximum Likelihood Estimation of Regularization Parameters in High-Dimensional Inverse Problems: An Empirical Bayesian Approach. Part II: Theoretical Analysis
High-dimensional MCMC with a standard splitting scheme for the underdamped Langevin diffusion
On the limitations of single-step drift and minorization in Markov chain convergence analysis
Normalizing constants of log-concave densities
On Stochastic Gradient Langevin Dynamics with Dependent Data Streams: The Fully Nonconvex Case
User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient
Non-asymptotic guarantees for sampling by stochastic gradient descent
Is there an analog of Nesterov acceleration for gradient-based MCMC?
Multi-level Monte Carlo methods for the approximation of invariant measures of stochastic differential equations
Multivariate approximations in Wasserstein distance by Stein's method and Bismut's formula
Convergence complexity analysis of Albert and Chib's algorithm for Bayesian probit regression
High-dimensional Bayesian inference via the unadjusted Langevin algorithm
Variance Reduction for Dependent Sequences with Applications to Stochastic Gradient MCMC
Higher order Langevin Monte Carlo algorithm
Geometry-informed irreversible perturbations for accelerated convergence of Langevin dynamics
Unadjusted Langevin algorithm for sampling a mixture of weakly smooth potentials
Adaptive invariant density estimation for continuous-time mixing Markov processes under sup-norm risk
Control variates for stochastic gradient MCMC
Approximations of piecewise deterministic Markov processes and their convergence properties
Data-free likelihood-informed dimension reduction of Bayesian inverse problems
Nonparametric Bayesian inference for reversible multidimensional diffusions
Mixing time guarantees for unadjusted Hamiltonian Monte Carlo
Wasserstein-based methods for convergence complexity analysis of MCMC with applications


Uses Software


Cites Work