An interruptible algorithm for perfect sampling via Markov chains (Q1296621)
From MaRDI portal
scientific article
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | An interruptible algorithm for perfect sampling via Markov chains | scientific article | |
Statements
An interruptible algorithm for perfect sampling via Markov chains (English)
2 August 1999
For many examples from statistical physics and computer science one seeks to sample from a probability distribution \(\pi\) on an enormously large state space, but elementary sampling is ruled out by the infeasibility of computing even an approximate normalizing constant. In the Markov chain Monte Carlo approach to approximate sampling, one constructs a Markov chain with long-run distribution \(\pi\) and runs it for a long time; however, determining how long is long enough to obtain a good approximation can be difficult both analytically and empirically. \textit{J. G. Propp} and \textit{D. B. Wilson} [Random Struct. Algorithms 9, No. 1/2, 223-252 (1996; Zbl 0859.60067)] gave an efficient algorithm that uses the same Markov chains to produce perfect samples from \(\pi\). However, the running time of their algorithm is an unbounded random variable whose order of magnitude is usually unknown a priori and which is not independent of the state sampled. The author presents a new algorithm that uses the same Markov chains to produce perfect samples from \(\pi\); it is based on acceptance/rejection sampling and eliminates user-impatience bias. The new algorithm has a running time of the same order as the Propp-Wilson algorithm and uses only logarithmically more space.
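The review describes the algorithm only at a high level. As a rough illustration of the rejection-based idea, the following is a minimal Python sketch for a toy monotone, reversible birth-death chain on \(\{0,\dots,10\}\); the chain, its parameters, and all function names are illustrative assumptions, not taken from the paper, and the sketch simplifies matters by exploiting reversibility (so the time-reversed kernel used in the first phase coincides with the forward kernel).

```python
import random

N = 10                    # toy state space {0, ..., N}; illustrative, not from the paper
P_UP, P_DOWN = 0.3, 0.2   # birth/death probabilities of the toy chain

def step(x, u):
    """Monotone random-function update phi(x, u) with u ~ Uniform(0, 1)."""
    if u < P_UP:
        return min(x + 1, N)
    if u < P_UP + P_DOWN:
        return max(x - 1, 0)
    return x

def draw_update_given_transition(x, y):
    """Draw u ~ Uniform(0, 1) conditioned on step(x, u) == y, by rejection."""
    while True:
        u = random.random()
        if step(x, u) == y:
            return u

def fill_attempt(t):
    """One accept/reject trial of horizon t; returns (accepted, candidate_state)."""
    # Phase 1: run the chain for t steps from the top state N.  The toy chain
    # is reversible, so this doubles as a run of the time-reversed chain.
    path = [N]
    for _ in range(t):
        path.append(step(path[-1], random.random()))
    candidate = path[-1]

    # Phase 2: regenerate update variables consistent with the forward
    # trajectory candidate -> ... -> N (the observed path read in reverse).
    traj = path[::-1]
    updates = [draw_update_given_transition(traj[i], traj[i + 1]) for i in range(t)]

    # Phase 3: drive the bottom state 0 with the same updates.  By monotonicity,
    # reaching N means every starting state would have coalesced, and the
    # candidate is accepted as an exact draw from the stationary distribution.
    z = 0
    for u in updates:
        z = step(z, u)
    return z == N, candidate

def perfect_sample(t=16):
    """Repeat independent trials, doubling the horizon, until one is accepted."""
    while True:
        accepted, x = fill_attempt(t)
        if accepted:
            return x
        t *= 2

if __name__ == "__main__":
    draws = [perfect_sample() for _ in range(2000)]
    # The toy chain satisfies detailed balance with pi(x) proportional to 1.5**x,
    # so the empirical histogram should be skewed toward the top state.
    for s in range(N + 1):
        print(s, draws.count(s))
```

Because each trial is an independent accept/reject experiment, aborting the loop early does not bias the distribution of the value eventually returned; this is the interruptibility property contrasted in the review with the unbounded, state-dependent running time of coupling from the past.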
Markov chain Monte Carlo
perfect simulation
rejection sampling