Minimising MCMC variance via diffusion limits, with an application to simulated tempering (Q2443188)

scientific article

    Statements

    Minimising MCMC variance via diffusion limits, with an application to simulated tempering (English)
    4 April 2014
    If a Markov chain \(\{X_n\}\) has a stationary distribution \(\pi\), then \(\int h(x)\, \pi(dx)\) can often be estimated by \(n^{-1}\sum_{i=1}^n h(X_i)\) for suitably large \(n\). The efficiency of this estimate can be measured by the asymptotic variance \(\mathrm{Var}(h,P)=\lim_{n \to \infty} n^{-1}\mathrm{Var}\bigl(\sum_{i=1}^n h(X_i)\bigr)\), where \(P\) is the transition kernel of the Markov chain. Given two Markov chain kernels \(P_1, P_2\) with the same invariant measure \(\pi\), one says that \(P_1\) dominates \(P_2\) if \(\mathrm{Var}(h,P_1)\leq \mathrm{Var}(h,P_2)\) for all admissible \(h\). The article develops this comparison of asymptotic variances for Langevin diffusions and gives sufficient conditions for domination. The results are then applied to simulated tempering algorithms in high-dimensional spaces: their limits are proved to be Langevin diffusions, and the most efficient Markov chain Monte Carlo algorithm among them is identified.
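    The following is a minimal, hypothetical sketch (not taken from the article) illustrating the quantities above: two random-walk Metropolis kernels \(P_1, P_2\) sharing a standard-normal invariant measure \(\pi\), the estimator \(n^{-1}\sum_{i=1}^n h(X_i)\), and a batch-means estimate of the asymptotic variance \(\mathrm{Var}(h,P)\). The target, the functional \(h\), the proposal scales, and the batch-means construction are illustrative assumptions, not choices made by the authors.

    # Illustrative sketch (assumptions, not the article's method): compare the
    # asymptotic variance Var(h, P) of two Metropolis kernels that share the
    # same stationary distribution pi (a standard normal here).
    import numpy as np

    rng = np.random.default_rng(0)

    def log_pi(x):
        # Log-density of the target pi (standard normal, up to an additive constant).
        return -0.5 * x * x

    def metropolis_chain(scale, n, x0=0.0):
        # Random-walk Metropolis chain with Gaussian proposals of the given scale;
        # its invariant measure is pi regardless of the scale.
        x = x0
        out = np.empty(n)
        for i in range(n):
            prop = x + scale * rng.standard_normal()
            if np.log(rng.uniform()) < log_pi(prop) - log_pi(x):
                x = prop
            out[i] = x
        return out

    def asymptotic_variance(h_values, n_batches=50):
        # Batch-means estimate of Var(h, P) = lim_n n^{-1} Var(sum_i h(X_i)).
        batch = len(h_values) // n_batches
        means = h_values[: batch * n_batches].reshape(n_batches, batch).mean(axis=1)
        return batch * means.var(ddof=1)

    h = lambda x: x ** 2          # functional whose pi-expectation is estimated
    n = 200_000
    for scale in (0.5, 2.4):      # two kernels P_1, P_2 with the same invariant pi
        xs = metropolis_chain(scale, n)
        print(f"scale={scale}: estimate={h(xs).mean():.3f}, "
              f"Var(h, P) ~ {asymptotic_variance(h(xs)):.2f}")

    A smaller estimated \(\mathrm{Var}(h,P)\) for a given \(h\) is consistent with, though of course does not prove, domination in the sense used above, which requires the inequality for all admissible \(h\).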
    Markov chain Monte Carlo
    simulated tempering
    optimal scaling
    diffusion limits