Optimal scaling of random-walk Metropolis algorithms on general target distributions
From MaRDI portal
Publication: 2196541
DOI: 10.1016/j.spa.2020.05.004 · zbMath: 1455.60100 · arXiv: 1904.12157 · OpenAlex: W3026366478 · MaRDI QID: Q2196541
Jeffrey S. Rosenthal, Gareth O. Roberts, Jun Yang
Publication date: 3 September 2020
Published in: Stochastic Processes and their Applications
Full work available at URL: https://arxiv.org/abs/1904.12157
Related Items
- Optimal scaling of random walk Metropolis algorithms using Bayesian large-sample asymptotics
- Complexity results for MCMC derived from quantitative bounds
- Conditional sequential Monte Carlo in high dimensions
- Interacting Langevin Diffusions: Gradient Structure and Ensemble Kalman Sampler
- Optimal scaling of MCMC beyond Metropolis
- Ensemble-Based Gradient Inference for Particle Methods in Optimization and Sampling
- Counterexamples for optimal scaling of Metropolis-Hastings chains with rough target densities
- Inverse Optimal Transport
- Efficiency of delayed-acceptance random walk Metropolis algorithms
- Randomized Hamiltonian Monte Carlo as scaling limit of the bouncy particle sampler and dimension-free convergence rates
Cites Work
- Diffusion limits of the random walk Metropolis algorithm in high dimensions
- Optimal scaling for the transient phase of Metropolis-Hastings algorithms: the longtime behavior
- Optimal scaling of the random walk Metropolis on elliptically symmetric unimodal targets
- Towards optimal scaling of Metropolis-coupled Markov chain Monte Carlo
- Optimal scaling of random walk Metropolis algorithms with non-Gaussian proposals
- Markov chains and stochastic stability
- Optimal scaling of random walk Metropolis algorithms with discontinuous target densities
- Scaling analysis of multiple-try MCMC methods
- Markov chain Monte Carlo: can we trust the third significant figure?
- The random walk Metropolis: linking theory and practice through a case study
- Optimal scaling for random walk Metropolis on spherically constrained target densities
- Optimal scaling for partially updating MCMC algorithms
- Conditions for rapid mixing of parallel and simulated tempering on multimodal distributions
- Sufficient conditions for torpid mixing of parallel and simulated tempering
- Approximate counting, uniform generation and rapidly mixing Markov chains
- Simple conditions for the convergence of the Gibbs sampler and Metropolis-Hastings algorithms
- Computable bounds for geometric convergence rates of Markov chains
- Weak convergence and optimal scaling of random walk Metropolis algorithms
- Honest exploration of intractable probability distributions via Markov chain Monte Carlo.
- Optimal scaling for various Metropolis-Hastings algorithms.
- Bounds on regeneration times and convergence rates for Markov chains
- A Dirichlet form approach to MCMC optimal scaling
- Inference from iterative simulation using multiple sequences
- Renewal theory and computable convergence rates for geometrically ergodic Markov chains
- Quantitative convergence rates of Markov chains: A simple account
- From Metropolis to diffusions: Gibbs states and optimal scaling.
- Optimal scaling of MALA for nonlinear regression
- Sufficient burn-in for Gibbs samplers for a hierarchical random effects model.
- Rates of convergence of the Hastings and Metropolis algorithms
- Optimal scaling and diffusion limits for the Langevin algorithm in high dimensions
- Hierarchical models and tuning of random walk Metropolis algorithms
- User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient
- Non-stationary phase of the MALA algorithm
- Diffusion limit for the random walk Metropolis algorithm out of stationarity
- On the efficiency of pseudo-marginal random walk Metropolis algorithms
- Optimal scaling for the transient phase of the random walk Metropolis algorithm: the mean-field limit
- Optimal scalings for local Metropolis-Hastings chains on nonproduct targets in high dimensions
- Optimal tuning of the hybrid Monte Carlo algorithm
- Minimising MCMC variance via diffusion limits, with an application to simulated tempering
- Weak convergence of Metropolis algorithms for non-i.i.d. target distributions
- Monte Carlo strategies in scientific computing.
- Optimal acceptance rates for Metropolis algorithms: Moving beyond 0.234
- Inverse problems: A Bayesian perspective
- Complexity bounds for Markov chain Monte Carlo algorithms via diffusion limits
- MCMC METHODS FOR DIFFUSION BRIDGES
- Handbook of Markov Chain Monte Carlo
- Optimal scaling of Metropolis algorithms: Heading toward general target distributions
- Optimal Scaling of Discrete Approximations to Langevin Diffusions
- Optimal scaling of the random walk Metropolis algorithm under Lp mean differentiability
- Minorization Conditions and Convergence Rates for Markov Chain Monte Carlo
- Equation of State Calculations by Fast Computing Machines
- Scaling Limits for the Transient Phase of Local Metropolis–Hastings Algorithms
- Hit-and-Run from a Corner
- Monte Carlo sampling methods using Markov chains and their applications
- The complexity of theorem-proving procedures
- Theoretical Guarantees for Approximate Sampling from Smooth and Log-Concave Densities
- Graphical models
- MCMC methods for functions: modifying old algorithms to make them faster