Rates of convergence of the Hastings and Metropolis algorithms

From MaRDI portal

Publication:1922398

DOI: 10.1214/AOS/1033066201
zbMath: 0854.60065
OpenAlex: W1973594349
Wikidata: Q56994734
Scholia: Q56994734
MaRDI QID: Q1922398

Richard L. Tweedie, Kerrie L. Mengersen

Publication date: 5 January 1997

Published in: The Annals of Statistics

Full work available at URL: https://doi.org/10.1214/aos/1033066201
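The publication above studies convergence rates of the Metropolis and Hastings samplers. As a point of reference, here is a minimal illustrative sketch (not taken from the paper) of the random-walk Metropolis algorithm for a one-dimensional target; the function and parameter names are chosen for this example only.

```python
import math
import random

def random_walk_metropolis(log_target, x0, n_steps, step_size=1.0, seed=0):
    """Random-walk Metropolis sampler for a 1-D target density.

    Proposes x' = x + step_size * N(0, 1) and accepts with probability
    min(1, pi(x') / pi(x)); because the Gaussian proposal is symmetric,
    the Hastings proposal-ratio correction equals 1 and drops out.
    """
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        proposal = x + step_size * rng.gauss(0.0, 1.0)
        # Accept/reject in log space to avoid under/overflow.
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return samples

# Example: sample from a standard normal target, pi(x) ∝ exp(-x^2 / 2).
chain = random_walk_metropolis(lambda x: -0.5 * x * x, x0=5.0, n_steps=20000)
```

The deliberately poor starting point `x0=5.0` illustrates why convergence rates matter: early iterations are not representative of the target, and geometric ergodicity results of the kind proved in the paper bound how quickly that initial bias decays.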





Related Items (only showing first 100 items)

Exponential inequalities for unbounded functions of geometrically ergodic Markov chains: applications to quantitative error bounds for regenerative Metropolis algorithms
On the convergence rates of some adaptive Markov chain Monte Carlo algorithms
Note on the Sampling Distribution for the Metropolis-Hastings Algorithm
MCMC Algorithms for Posteriors on Matrix Spaces
Markov Chain Importance Sampling—A Highly Efficient Estimator for MCMC
Unnamed Item
Complexity bounds for Markov chain Monte Carlo algorithms via diffusion limits
Improved Sampling-Importance Resampling and Reduced Bias Importance Sampling
Numerical Results for the Metropolis Algorithm
Nonasymptotic Bounds on the Mean Square Error for MCMC Estimates via Renewal Techniques
On the convergence rate of the elitist genetic algorithm based on mutation probability
State estimation for aoristic models
Convergence of Position-Dependent MALA with Application to Conditional Simulation in GLMMs
Spectral gaps and error estimates for infinite-dimensional Metropolis-Hastings with non-Gaussian priors
Analysis of a Class of Multilevel Markov Chain Monte Carlo Algorithms Based on Independent Metropolis–Hastings
Neuronized Priors for Bayesian Sparse Linear Regression
Finding our way in the dark: approximate MCMC for approximate Bayesian methods
A large deviation principle for the empirical measures of Metropolis-Hastings chains
Scalable Optimization-Based Sampling on Function Space
Estimation and inference by stochastic optimization
Exact convergence analysis for metropolis–hastings independence samplers in Wasserstein distances
Comparison of hit-and-run, slice sampler and random walk Metropolis
Bayesian Inverse Problems with $l_1$ Priors: A Randomize-Then-Optimize Approach
Pathwise accuracy and ergodicity of metropolized integrators for SDEs
Adaptive MCMC methods for inference on affine stochastic volatility models with jumps
Unnamed Item
Unnamed Item
Convergence of Conditional Metropolis-Hastings Samplers
Markov Chain Monte Carlo for Computing Rare-Event Probabilities for a Heavy-Tailed Random Walk
On the Use of Local Optimizations within Metropolis–Hastings Updates
Markov-chain monte carlo: Some practical implications of theoretical results
Approximating Hidden Gaussian Markov Random Fields
On the geometric ergodicity of Metropolis-Hastings algorithms
A Bayesian inference approach to the ill-posed Cauchy problem of steady-state heat conduction
Coupling a stochastic approximation version of EM with an MCMC procedure
Perfect Samplers for Mixtures of Distributions
Efficiency and Convergence Properties of Slice Samplers
Improving Convergence of the Hastings–Metropolis Algorithm with an Adaptive Proposal
A note on geometric ergodicity and floating-point roundoff error
Ergodicity of Markov chain Monte Carlo with reversible proposal
Exact bound for the convergence of metropolis chains
Theoretical and numerical comparison of some sampling methods for molecular dynamics
Bayesian computation: a summary of the current state, and samples backwards and forwards
Annealing evolutionary stochastic approximation Monte Carlo for global optimization
Component-wise Markov chain Monte Carlo: uniform and geometric ergodicity under mixing and composition
Bayesian Robustness: A Nonasymptotic Viewpoint
Structure balance and opinions dynamic in signed social network
Mixing of Metropolis-adjusted Markov chains via couplings: the high acceptance regime
Approximating the spectral gap of the Pólya-Gamma Gibbs sampler
Pseudo-perfect and adaptive variants of the Metropolis–Hastings algorithm with an independent candidate density
Convergence rates of Metropolis-Hastings algorithms
Noise-free sampling algorithms via regularized Wasserstein proximals
Simple conditions for metastability of continuous Markov chains
Sensitivity and convergence of uniformly ergodic Markov chains
Conductance bounds on the L2 convergence rate of Metropolis algorithms on unbounded state spaces
Estimation of risk contributions with MCMC
Unnamed Item
Using a Markov Chain to Construct a Tractable Approximation of an Intractable Probability Distribution
Metropolis Integration Schemes for Self-Adjoint Diffusions
Limit theorems for sequential MCMC methods
Annealing evolutionary stochastic approximation Monte Carlo for global optimization
Maximum Likelihood Estimation of a Multi-Dimensional Log-Concave Density
Particle Markov Chain Monte Carlo Methods
Computation of Expectations by Markov Chain Monte Carlo Methods
Unnamed Item
Stochastic gradient descent and fast relaxation to thermodynamic equilibrium: A stochastic control approach
Using simulation methods for Bayesian econometric models: inference, development, and communication
Slow convergence of the Gibbs sampler
Polynomial convergence rates of Markov chains
Geometric ergodicity of Metropolis algorithms
On the influence of the proposal distributions on a reversible jump MCMC algorithm applied to the detection of multiple change-points
Which ergodic averages have finite asymptotic variance?
Approximate Bayesian computation by modelling summary statistics in a quasi-likelihood framework
Practical drift conditions for subgeometric rates of convergence.
A new class of stochastic EM algorithms. Escaping local maxima and handling intractable sampling
Information-geometric Markov chain Monte Carlo methods using diffusions
Convergence rates of two-component MCMC samplers
Oracle lower bounds for stochastic gradient sampling algorithms
Stochastic zeroth-order discretizations of Langevin diffusions for Bayesian inference
On the theoretical properties of the exchange algorithm
Exact convergence analysis of the independent Metropolis-Hastings algorithms
Saddlepoint approximation for the generalized inverse Gaussian Lévy process
Multilevel Sequential Monte Carlo with Dimension-Independent Likelihood-Informed Proposals
A computable bound of the essential spectral radius of finite range metropolis-Hastings kernels
Batch means and spectral variance estimators in Markov chain Monte Carlo
Annealing stochastic approximation Monte Carlo algorithm for neural network training
Improving the convergence of reversible samplers
Variance bounding of delayed-acceptance kernels
Stability of noisy Metropolis-Hastings
On the ergodicity properties of some adaptive MCMC algorithms
Directed hybrid random networks mixing preferential attachment with uniform attachment mechanisms
Multiplicative random walk Metropolis-Hastings on the real line
Parameter estimation of population pharmacokinetic models with stochastic differential equations: implementation of an estimation algorithm
Rates of convergence of the Hastings and Metropolis algorithms
Convergence properties of the Gibbs sampler for perturbations of Gaussians
On a Metropolis-Hastings importance sampling estimator
Dimension-Independent MCMC Sampling for Inverse Problems with Non-Gaussian Priors
Central limit theorem for triangular arrays of non-homogeneous Markov chains
Sequentially interacting Markov chain Monte Carlo methods
On the rate of convergence of the Gibbs sampler for the 1-D Ising model by geometric bound








This page was built for publication: Rates of convergence of the Hastings and Metropolis algorithms