On a Metropolis-Hastings importance sampling estimator (Q2180048)
From MaRDI portal
scientific article
Language | Label | Description | Also known as |
---|---|---|---|
English | On a Metropolis-Hastings importance sampling estimator | scientific article | |
Statements
On a Metropolis-Hastings importance sampling estimator (English)
13 May 2020
The article deals with the fundamental problem of calculating mathematical expectations with respect to a partially unknown probability measure \(\mu\) on a measurable space \(G\), determined by the condition \(d\mu/d\mu_0(x) = Z^{-1}\rho(x)\), where \(x \in G\) and \(\mu_0\) is an a priori known reference measure on \(G\). The normalizing constant \(Z = \int_G \rho(x)\, \mu_0(dx) \in (0, \infty)\) is unknown. The goal is to compute \(\mathbf{E}_\mu(f)=\int_G f(x)\, \mu(dx)\) using only evaluations of \(\rho\) and \(f\). For this purpose, one can simulate a Markov chain \((X_n)_{n\in \mathbb{N}}\) by the well-known Metropolis-Hastings (MH) algorithm and estimate \(\mathbf{E}_\mu(f)\) by \(S_n(f)=n^{-1}\sum_{k=1}^n f(X_k)\). However, an essential part of the MH algorithm is an acceptance/rejection step: given \(X_n\), a proposal \(Y_{n+1}\) is drawn first, and the state \(X_{n+1}=Y_{n+1}\) is accepted only with a certain probability (otherwise \(X_{n+1}=X_n\) remains unchanged). This makes the samples \((X_n)_{n\in \mathbb{N}}\) highly correlated.

The MH importance sampling estimator proposed in the article uses the pairs \((X_n,Y_n)_{n \in \mathbb{N}}\) jointly; the estimate of \(\mathbf{E}_\mu(f)\) is obtained as \[ A_n(f)=\frac{\sum_{k=1}^n w(X_k,Y_k)f(Y_k)}{\sum_{k=1}^n w(X_k,Y_k)}, \] where \(w(x,y)\) is a weight function. For this estimator, a strong law of large numbers and a central limit theorem are proved, and an explicit mean squared error bound is provided. Unlike for the classical estimator, the asymptotic variance of the MH importance sampling estimator does not involve any correlation term. Numerical experiments comparing \(S_n(f)\) and \(A_n(f)\) are given.
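The following is a minimal sketch, not taken from the article, of how the classical ergodic average \(S_n(f)\) and an estimator of the form \(A_n(f)\) can be computed side by side on a toy one-dimensional target. It assumes an independence Gaussian proposal with density \(q\) and the self-normalized importance weight \(w(x,y)=\rho(y)/q(y)\); the proposal, the specific weight function, and all numerical choices are illustrative assumptions, not the constructions used in the paper.

```python
import numpy as np

# Toy target on G = R: unnormalized density rho(x) = exp(-x^4/4) with respect to
# Lebesgue measure (playing the role of mu_0); the constant Z is treated as unknown.
def rho(x):
    return np.exp(-x**4 / 4.0)

def f(x):
    return x**2  # quantity of interest E_mu(f)

rng = np.random.default_rng(0)
n = 50_000
sigma = 1.5  # std. dev. of the independence Gaussian proposal (assumed choice)

def q(y):
    # density of the independence proposal N(0, sigma^2)
    return np.exp(-y**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

X = np.empty(n + 1)  # MH chain states X_k
Y = np.empty(n + 1)  # MH proposals Y_k
X[0] = 0.0

for k in range(1, n + 1):
    y = rng.normal(0.0, sigma)          # draw proposal Y_k
    Y[k] = y
    # MH acceptance probability for an independence sampler:
    # alpha = min(1, [rho(y) q(x)] / [rho(x) q(y)])
    alpha = min(1.0, rho(y) * q(X[k - 1]) / (rho(X[k - 1]) * q(y)))
    X[k] = y if rng.uniform() < alpha else X[k - 1]  # accept/reject step

# Classical ergodic-average estimator S_n(f)
S_n = f(X[1:]).mean()

# Importance-sampling-type estimator A_n(f) built from the proposals Y_k with
# self-normalized weights; here w(x, y) = rho(y) / q(y) (assumed weight form).
w = rho(Y[1:]) / q(Y[1:])
A_n = np.sum(w * f(Y[1:])) / np.sum(w)

print(f"S_n(f) = {S_n:.4f},  A_n(f) = {A_n:.4f}")
```

Under this particular assumption the proposals \(Y_k\) are i.i.d., so \(A_n(f)\) reduces to a classical self-normalized importance sampling estimator, which illustrates in the simplest setting why its asymptotic variance carries no correlation term, in contrast to \(S_n(f)\) computed from the correlated chain \((X_k)\).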
Metropolis-Hastings algorithm
importance sampling
Markov chains
variance reduction
central limit theorem