On the Choice of Alternative Measures in Importance Sampling with Markov Chains
DOI: 10.1287/opre.43.3.509 · zbMATH Open: 0842.60072 · OpenAlex: W2019606417 · MaRDI QID: Q4861366
Authors: Sigrún Andradóttir, Daniel P. Heyman, Teunis J. Ott
Publication date: 1 August 1996
Published in: Operations Research
Full work available at URL: https://doi.org/10.1287/opre.43.3.509
Recommendations
- Potentially unlimited variance reduction in importance sampling of Markov chains
- Importance sampling for Markov chains: asymptotics for the variance
- Monte Carlo simulation and large deviations theory for uniformly recurrent Markov chains
- Examples comparing importance sampling and the Metropolis algorithm
- Importance Sampling for Stochastic Simulations
Keywords: likelihood ratio; simulation of Markov chains; distribution of the logarithm of the likelihood ratio; sample path length
Classifications: Applications of Markov chains and discrete-time Markov processes on general state spaces (social mobility, learning theory, industrial processes, etc.) (60J20); Sampling theory, sample surveys (62D05)
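To illustrate the technique the keywords point at — simulating a Markov chain under an alternative transition law and weighting each sample path by its likelihood ratio — here is a minimal sketch. The birth-death chain, its parameters, and the swapped-probability change of measure are illustrative assumptions chosen for concreteness, not the specific alternative measures analyzed in the paper.

```python
import random

def is_hitting_prob(p=0.3, B=10, n_runs=20000, seed=0):
    """Estimate P(hit B before 0 | start at 1) for a birth-death chain
    that moves up with probability p and down with probability 1 - p.

    The event is rare when p < 1/2, so we sample paths under an
    alternative measure that swaps the up/down probabilities (a classic
    exponential tilt for this chain) and weight each path by the
    likelihood ratio of the original measure to the alternative one,
    accumulated step by step along the sample path.
    """
    rng = random.Random(seed)
    q = 1.0 - p
    total = 0.0
    for _ in range(n_runs):
        x, L = 1, 1.0                  # current state, running likelihood ratio
        while 0 < x < B:
            if rng.random() < q:       # alternative measure: up with probability q
                L *= p / q             # ratio of original to alternative up-step prob
                x += 1
            else:                      # alternative measure: down with probability p
                L *= q / p             # ratio of original to alternative down-step prob
                x -= 1
        if x == B:                     # indicator of the rare event, weighted by L
            total += L
    return total / n_runs

print(is_hitting_prob())
```

Under the swapped measure most paths reach B, and every successful path carries the same weight (p/q)^(B-1), so the estimator's variance comes only from the Bernoulli success indicator — a concrete instance of how a well-chosen alternative measure can dramatically reduce variance.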
Cited In (5)
- Alternative proof and interpretations for a recent state-dependent importance sampling scheme
- The cross-entropy method with patching for rare-event simulation of large Markov chains
- Importance sampling for Markov chains: asymptotics for the variance
- Examples comparing importance sampling and the Metropolis algorithm
- On the simulation of Markov chain steady-state distribution using CFTP algorithm