Optimal Scaling of Discrete Approximations to Langevin Diffusions
From MaRDI portal
Publication: 4214235
DOI: 10.1111/1467-9868.00123 · zbMath: 0913.60060 · OpenAlex: W2029164135 · Wikidata: Q56700916 · Scholia: Q56700916 · MaRDI QID: Q4214235
Authors: Jeffrey S. Rosenthal, Gareth O. Roberts
Publication date: 6 January 1999
Published in: Journal of the Royal Statistical Society Series B: Statistical Methodology
Full work available at URL: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.48.3390
Mathematics Subject Classification: Monte Carlo methods (65C05); Applications of Markov chains and discrete-time Markov processes on general state spaces (social mobility, learning theory, industrial processes, etc.) (60J20)
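The paper's central result concerns the Metropolis-adjusted Langevin algorithm (MALA): for high-dimensional product targets, the proposal variance should scale as d^(-1/3) (step size d^(-1/6)), and the step should be tuned so that roughly 57.4% of proposals are accepted. A minimal sketch of MALA on a standard Gaussian target, for orientation only — the function names and the tuning constant 1.65 are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def mala(log_pi, grad_log_pi, x0, step, n_iter, rng):
    """Metropolis-adjusted Langevin algorithm.

    Proposal: y = x + (step**2 / 2) * grad_log_pi(x) + step * N(0, I),
    accepted with the usual Metropolis-Hastings correction.
    Returns the final state and the empirical acceptance rate.
    """
    x = np.asarray(x0, dtype=float)
    accepted = 0

    def log_q(y, x):
        # Log density (up to a constant) of the Gaussian proposal y | x.
        mu = x + 0.5 * step ** 2 * grad_log_pi(x)
        return -np.sum((y - mu) ** 2) / (2.0 * step ** 2)

    for _ in range(n_iter):
        y = x + 0.5 * step ** 2 * grad_log_pi(x) + step * rng.standard_normal(x.shape)
        log_alpha = log_pi(y) - log_pi(x) + log_q(x, y) - log_q(y, x)
        if np.log(rng.uniform()) < log_alpha:
            x = y
            accepted += 1
    return x, accepted / n_iter

# Standard Gaussian target in d dimensions; the asymptotic result suggests
# step proportional to d**(-1/6), with ~0.574 acceptance at the optimum.
d = 50
rng = np.random.default_rng(0)
log_pi = lambda x: -0.5 * np.sum(x ** 2)
grad_log_pi = lambda x: -x
step = 1.65 * d ** (-1.0 / 6.0)  # 1.65 is an illustrative tuning constant
x_last, acc_rate = mala(log_pi, grad_log_pi, np.zeros(d), step, 5000, rng)
```

In practice one monitors `acc_rate` and adjusts `step` toward the 0.574 target, in contrast to the 0.234 rule for random-walk Metropolis.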
Related Items
- A blocking scheme for dimension-robust Gibbs sampling in large-scale image deblurring
- Connecting the Dots: Numerical Randomized Hamiltonian Monte Carlo with State-Dependent Event Rates
- Nonreversible Jump Algorithms for Bayesian Nested Model Selection
- Model-Based Edge Clustering
- Unnamed Item
- Spatial Shrinkage Via the Product Independent Gaussian Process Prior
- Scaling Limits for the Transient Phase of Local Metropolis–Hastings Algorithms
- Bayes analysis of the generalized gamma AFT models for left truncated and right censored data
- Reflections on Bayesian inference and Markov chain Monte Carlo
- Robustifying models against adversarial attacks by Langevin dynamics
- Geostatistical Methods for Disease Mapping and Visualisation Using Data from Spatio‐temporally Referenced Prevalence Surveys
- A reduced-rank approach to predicting multiple binary responses through machine learning
- Convergence of Position-Dependent MALA with Application to Conditional Simulation in GLMMs
- Complexity results for MCMC derived from quantitative bounds
- Mixing of MCMC algorithms
- Scaling Up Bayesian Uncertainty Quantification for Inverse Problems Using Deep Neural Networks
- Gradient-Based Markov Chain Monte Carlo for Bayesian Inference With Non-differentiable Priors
- Accelerating inference for stochastic kinetic models
- Non-reversible guided Metropolis kernel
- Data augmentation for Bayesian deep learning
- A large deviation principle for the empirical measures of Metropolis-Hastings chains
- Interacting Langevin Diffusions: Gradient Structure and Ensemble Kalman Sampler
- A statistical approach to estimating adsorption-isotherm parameters in gradient-elution preparative liquid chromatography
- Bayesian Modeling of Sequential Discoveries
- Bayesian time‐varying autoregressive models of COVID‐19 epidemics
- A fresh Take on ‘Barker Dynamics’ for MCMC
- NeuralUQ: A Comprehensive Library for Uncertainty Quantification in Neural Differential Equations and Operators
- Optimal scaling of MCMC beyond Metropolis
- Multilevel Delayed Acceptance MCMC
- Convergence of unadjusted Hamiltonian Monte Carlo for mean-field models
- Efficient and generalizable tuning strategies for stochastic gradient MCMC
- Scalable Bayesian computation for crossed and nested hierarchical models
- Modeling volatility for high-frequency data with rounding error: a nonparametric Bayesian approach
- Adaptive tuning of Hamiltonian Monte Carlo within sequential Monte Carlo
- A non-stationary spatial generalized linear mixed model approach for studying plant diversity
- Informed Proposals for Local MCMC in Discrete Spaces
- Error Bounds and Normalising Constants for Sequential Monte Carlo Samplers in High Dimensions
- Partial differential equations and stochastic methods in molecular dynamics
- Stochastic Payments per Claim Incurred
- Theoretical and numerical comparison of some sampling methods for molecular dynamics
- Asymptotic variance for random walk Metropolis chains in high dimensions: logarithmic growth via the Poisson equation
- Particle Metropolis-Hastings using gradient and Hessian information
- Efficient computational strategies for doubly intractable problems with applications to Bayesian social networks
- Bayesian computation: a summary of the current state, and samples backwards and forwards
- MCMC methods for functions: modifying old algorithms to make them faster
- INLA or MCMC? A tutorial and comparative evaluation for spatial prediction in log-Gaussian Cox processes
- Unnamed Item
- Ensemble Transport Adaptive Importance Sampling
- Geometric integrators and the Hamiltonian Monte Carlo method
- Bayesian inversion by parallel interacting Markov chains
- Stochastic Gradient Markov Chain Monte Carlo
- Skew brownian motion and complexity of the alps algorithm
- Stochastic Gradient MCMC for State Space Models
- Decreasing Flow Uncertainty in Bayesian Inverse Problems Through Lagrangian Drifter Control
- An efficient proposal distribution for Metropolis-Hastings using a \(B\)-splines technique
- Calibrate, emulate, sample
- Optimal scaling of MaLa for nonlinear regression.
- Optimal scaling of random walk Metropolis algorithms using Bayesian large-sample asymptotics
- Black-Litterman model for continuous distributions
- Sequential Monte Carlo EM for multivariate probit models
- Efficient MCMC for temporal epidemics via parameter reduction
- Piecewise deterministic Markov processes for continuous-time Monte Carlo
- Merging MCMC subposteriors through Gaussian-process approximations
- Designing simple and efficient Markov chain Monte Carlo proposal kernels
- Stochastic zeroth-order discretizations of Langevin diffusions for Bayesian inference
- An adaptive multiple-try Metropolis algorithm
- Computation of Gaussian orthant probabilities in high dimension
- Dimension-independent likelihood-informed MCMC
- Weighted multilevel Langevin simulation of invariant measures
- Scalable posterior approximations for large-scale Bayesian inverse problems via likelihood-informed parameter and state reduction
- Neglected chaos in international stock markets: Bayesian analysis of the joint return-volatility dynamical system
- Robust beta regression modeling with errors-in-variables: a Bayesian approach and numerical applications
- Complexity bounds for Markov chain Monte Carlo algorithms via diffusion limits
- Optimal scalings for local Metropolis-Hastings chains on nonproduct targets in high dimensions
- FEM-based discretization-invariant MCMC methods for PDE-constrained Bayesian inverse problems
- Non-reversible Metropolis-Hastings
- Hierarchical models: local proposal variances for RWM-within-Gibbs and MALA-within-Gibbs
- On sampling from a log-concave density using kinetic Langevin diffusions
- A Dirichlet form approach to MCMC optimal scaling
- Optimal scaling and diffusion limits for the Langevin algorithm in high dimensions
- Markov chain Monte Carlo inference for Markov jump processes via the linear noise approximation
- Optimal strategies for the control of autonomous vehicles in data assimilation
- Adaptive Euler-Maruyama method for SDEs with nonglobally Lipschitz drift
- Optimal scaling of the MALA algorithm with irreversible proposals for Gaussian targets
- Accelerating Metropolis-within-Gibbs sampler with localized computations of differential equations
- Optimal scaling of random-walk Metropolis algorithms on general target distributions
- The stochastic quasi-chemical model for bacterial growth: variational Bayesian parameter update
- Adaptive Gibbs samplers and related MCMC methods
- Diffusion limits of the random walk Metropolis algorithm in high dimensions
- Optimal tuning of the hybrid Monte Carlo algorithm
- Iterative Importance Sampling Algorithms for Parameter Estimation
- Towards optimal scaling of Metropolis-coupled Markov chain Monte Carlo
- Markov chain Monte Carlo algorithms with sequential proposals
- Approximate forward-backward algorithm for a switching linear Gaussian model
- Minimising MCMC variance via diffusion limits, with an application to simulated tempering
- Error bounds for Metropolis-Hastings algorithms applied to perturbations of Gaussian measures in high dimensions
- Localization for MCMC: sampling high-dimensional posterior distributions with local structure
- LGM split sampler: an efficient MCMC sampling scheme for latent Gaussian models
- Bayesian adaptation of chaos representations using variational inference and sampling on geodesics
- Optimal scaling for the transient phase of Metropolis Hastings algorithms: the longtime behavior
- Optimal scaling for various Metropolis-Hastings algorithms.
- Simulator-free solution of high-dimensional stochastic elliptic partial differential equations using deep neural networks
- Optimal scaling for random walk Metropolis on spherically constrained target densities
- Langevin diffusions and the Metropolis-adjusted Langevin algorithm
- MALA-within-Gibbs Samplers for High-Dimensional Distributions with Sparse Conditional Structure
- Parallel Local Approximation MCMC for Expensive Models
- X-TMCMC: adaptive kriging for Bayesian inverse modeling
- Informed reversible jump algorithms
- Langevin Diffusion for Population Based Sampling with an Application in Bayesian Inference for Pharmacodynamics
- Weak convergence of Metropolis algorithms for non-I.I.D. target distributions
- Langevin Dynamics With General Kinetic Energies
- Counterexamples for optimal scaling of Metropolis-Hastings chains with rough target densities
- On the limitations of single-step drift and minorization in Markov chain convergence analysis
- Efficient strategy for the Markov chain Monte Carlo in high-dimension with heavy-tailed target probability distribution
- An adaptive approach to Langevin MCMC
- f-SAEM: a fast stochastic approximation of the EM algorithm for nonlinear mixed effects models
- On the Use of Local Optimizations within Metropolis–Hastings Updates
- Markov-chain monte carlo: Some practical implications of theoretical results
- Efficient Construction of Reversible Jump Markov Chain Monte Carlo Proposal Distributions
- Hybrid Monte Carlo on Hilbert spaces
- Adaptive Thermostats for Noisy Gradient Systems
- User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient
- Making inference of British household's happiness efficiency: a Bayesian latent model
- Bayesian density estimation from grouped continuous data
- Path sampling with stochastic dynamics: some new algorithms
- Stochastic seismic waveform inversion using generative adversarial networks as a geological prior
- Optimal scaling for partially updating MCMC algorithms
- On the stability of sequential Monte Carlo methods in high dimensions
- Efficient Markov Chain Monte Carlo Methods for Decoding Neural Spike Trains
- Random walk Metropolis algorithm in high dimension with non-Gaussian target distributions
- Non-asymptotic guarantees for sampling by stochastic gradient descent
- Bayesian Prediction of Spatial Count Data Using Generalized Linear Mixed Models
- Bayesian network marker selection via the thresholded graph Laplacian Gaussian prior
- Optimal scaling of Metropolis algorithms: Heading toward general target distributions
- Weight-preserving simulated tempering
- Sampling from manifold-restricted distributions using tangent bundle projections
- Efficiency of delayed-acceptance random walk metropolis algorithms
- A null space method for over-complete blind source separation
- A piecewise deterministic Monte Carlo method for diffusion bridges
- MALA with annealed proposals: a generalization of locally and globally balanced proposal distributions
- Accelerated Dimension-Independent Adaptive Metropolis
- Non-stationary phase of the MALA algorithm
- Geometric adaptive Monte Carlo in random environment
- Randomized Hamiltonian Monte Carlo as scaling limit of the bouncy particle sampler and dimension-free convergence rates
- Hybrid Monte Carlo methods for sampling probability measures on submanifolds
- Adaptive random neighbourhood informed Markov chain Monte Carlo for high-dimensional Bayesian variable selection
- Ergodicity for SDEs and approximations: locally Lipschitz vector fields and degenerate noise.
- Diffusion limit for the random walk Metropolis algorithm out of stationarity
- On the efficiency of pseudo-marginal random walk Metropolis algorithms
- A Randomized Maximum A Posteriori Method for Posterior Sampling of High Dimensional Nonlinear Bayesian Inverse Problems
- Automatic zig-zag sampling in practice
- Joint estimation of Robin coefficient and domain boundary for the Poisson problem
- Generalization of discrete-time geometric bounds to convergence rate of Markov processes on Rn
- Optimal scaling for the transient phase of the random walk Metropolis algorithm: the mean-field limit
Uses Software