Sequential Monte Carlo techniques for the solution of linear systems (Q1891301)
From MaRDI portal
scientific article
Language | Label | Description | Also known as
---|---|---|---
English | Sequential Monte Carlo techniques for the solution of linear systems | scientific article |
Statements
Sequential Monte Carlo techniques for the solution of linear systems (English)
8 September 1996
In this interesting and informative paper, the author introduces and analyzes a sequential Monte Carlo estimation process for the solution of linear systems. The linear system \(Ax= b\) can be rearranged as \[ x= b+ Hx,\tag{1} \] where \(H= I- A\), which leads to the series solution \[ x= b+ Hb+ H^2 b+\cdots\;.\tag{2} \] A (plain) Monte Carlo estimate \(g\) of the \(i\)th component of this series can be obtained from an appropriate random walk (Markov process). Averaging \(w\) independent trials of the method yields an estimate \(G_w\) whose variance decreases in proportion to \(w^{-1}\).

Now suppose \(y\) is an approximation of \(x\); one is then interested in estimating the correction \(z= x- y\). Since the source term is small when \(y\) is close to \(x\), the variance of an estimator for \(z\) is much smaller than that of a direct estimator for \(x\). The sequential Monte Carlo technique exploits this idea systematically. Substituting \(x= y+ z\) into (1) gives \(z= d+ Hz\), where \(d= b+ Hy- y\), which has the same form as (1) and can therefore be expanded as in (2). Start with \(y= y^0= 0\) and generate an estimate \(G^0_{w_0}\) of \(z^0\). Then set \(y= y^1= y^0+ G^0_{w_0}\) and repeat the process to obtain an estimate \(G^1_{w_1}\) of \(z^1\). Continuing in this way produces the successively improved approximations \(y^0, y^1,\dots\;\). It is proved that this technique possesses a very favorable overall variance, which at the \(n\)th stage is \(\lambda^n\) times the initial variance for some \(\lambda< 1\).

Additionally, the author presents numerical experiments comparing the method with the Jacobi, Gauss-Seidel, conjugate gradient, and plain Monte Carlo methods on large linear systems, demonstrating reductions in the number of steps required for solution by factors on the order of thousands. Finally, the initial sections of the paper provide self-contained treatments of the Jacobi, Gauss-Seidel and conjugate gradient methods, as well as an introduction to Monte Carlo processes.
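A minimal Python sketch of the idea reviewed above may help fix it; it is not the author's implementation. The uniform transition probabilities, the stopping probability `p_stop`, the per-stage walk count, and the helper names `mc_component` and `sequential_mc_solve` are illustrative assumptions, and the scheme is only expected to behave well when the Neumann series (2) converges (e.g. when \(\|H\|\) is small).

```python
import numpy as np

def mc_component(H, d, i, n_walks, p_stop=0.5, rng=None):
    """Plain Monte Carlo estimate of the i-th component of sum_m (H^m d)_i,
    using random walks with uniform transitions, termination probability
    p_stop, and importance-weight corrections (collision estimator)."""
    rng = np.random.default_rng() if rng is None else rng
    n = H.shape[0]
    total = 0.0
    for _ in range(n_walks):
        k, weight, score = i, 1.0, d[i]          # m = 0 term of the series
        while rng.random() >= p_stop:            # continue with prob. 1 - p_stop
            j = rng.integers(n)                  # uniform transition k -> j
            weight *= H[k, j] * n / (1.0 - p_stop)  # correct for sampling prob.
            k = j
            score += weight * d[k]               # contribution of the m-th term
        total += score
    return total / n_walks

def sequential_mc_solve(A, b, n_stages=10, walks_per_stage=200, rng=None):
    """Sketch of the sequential idea: at each stage estimate the correction z
    solving z = d + H z with d = b + H y - y, then update y += z_hat.
    As y improves, d shrinks, and so does the estimator's variance."""
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]
    H = np.eye(n) - A                            # rearrange A x = b as x = b + H x
    y = np.zeros(n)
    for _ in range(n_stages):
        d = b + H @ y - y                        # residual source term
        z_hat = np.array([mc_component(H, d, i, walks_per_stage, rng=rng)
                          for i in range(n)])
        y = y + z_hat
    return y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 8
    # Near-identity test matrix so the Neumann series converges quickly.
    A = np.eye(n) + 0.1 * rng.standard_normal((n, n)) / n
    b = rng.standard_normal(n)
    x_mc = sequential_mc_solve(A, b, rng=rng)
    print("max error:", np.max(np.abs(x_mc - np.linalg.solve(A, b))))
```

The point of the sequential stage is visible in `sequential_mc_solve`: the source term \(d\) shrinks as \(y\) improves, so walks of the same cost produce corrections of correspondingly smaller variance, which is the mechanism behind the \(\lambda^n\) variance reduction described in the review.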
Monte Carlo method
sequential sampling
Jacobi method
Gauss-Seidel method
conjugate gradient method
Markov process
Monte Carlo estimation
random walk
numerical experiments
large linear systems