Numerical approximation of the product of the square root of a matrix with a vector (Q1978127)
From MaRDI portal
scientific article
Language | Label | Description | Also known as
---|---|---|---
English | Numerical approximation of the product of the square root of a matrix with a vector | scientific article |
Statements
Numerical approximation of the product of the square root of a matrix with a vector (English)
11 November 2001
Given a positive definite matrix \(A\) and a vector \(c\), the paper presents two methods that iteratively approximate the product \(A^{1/2}c\) directly, rather than computing it after first approximating \(A^{1/2}\). Both methods assume that \(A\) has been reduced to tridiagonal form, an operation that costs \(O(n^3)\) operations but is performed only once. The first method then requires \(O(n^2)\) operations per iteration, while for the second this figure drops to \(O(n)\).

The first method is a modification of Newton's method in which the iterates \(x_k\) are subject to the constraint \(x_k^T x_k = c^T A c\). Local convergence of the method is proved and, to overcome the difficulty associated with ill-conditioning of the Jacobian matrices, a special Krylov subspace procedure based on the Lanczos method is developed.

The second method rests on a very simple but powerful idea. The function \(x(t):=(tA+(1-t)I)^{1/2}c\), \(0\leq t \leq 1\), is the solution of the initial value problem \[ dx/dt=-\frac{1}{2}(tA+(1-t)I)^{-1}(I-A)x(t), \qquad x(0)=c, \] where the square roots involved are the unique positive definite ones; hence \(x(1)=A^{1/2}c\). To calculate \(x(1)\) approximately, a step-control procedure based on the Runge-Kutta-Fehlberg method is employed.

The paper reports the results obtained on an interesting set of matrices \(A\), including Hilbert matrices of various dimensions. The first method yields good results in rather few cases, while the second yields good results in all of them. Moreover, these results also compare favourably with another uniformly successful approach based on approximating \(A^{1/2}\) with a third-order method.
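The ODE-based idea can be illustrated with a short numerical sketch (not the authors' code): integrate \(dx/dt=-\frac{1}{2}(tA+(1-t)I)^{-1}(I-A)x(t)\) from \(t=0\) to \(t=1\) starting at \(x(0)=c\), so that \(x(1)\approx A^{1/2}c\). SciPy's adaptive `RK45` integrator stands in here for the Runge-Kutta-Fehlberg step control used in the paper, and the dense linear solve below ignores the tridiagonal reduction that gives the method its \(O(n)\) per-step cost.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sqrtm_times_vec(A, c, rtol=1e-8, atol=1e-10):
    """Approximate A^{1/2} c by integrating the initial value problem
    dx/dt = -1/2 (tA + (1-t)I)^{-1} (I - A) x(t),  x(0) = c."""
    n = A.shape[0]
    I = np.eye(n)

    def rhs(t, x):
        # Solve (tA + (1-t)I) y = (I - A) x rather than forming an inverse.
        M = t * A + (1.0 - t) * I
        return -0.5 * np.linalg.solve(M, (I - A) @ x)

    sol = solve_ivp(rhs, (0.0, 1.0), c, method="RK45", rtol=rtol, atol=atol)
    return sol.y[:, -1]  # approximation to x(1) = A^{1/2} c

# Small check against an explicit square root via eigendecomposition.
A = np.array([[4.0, 1.0], [1.0, 3.0]])  # symmetric positive definite
c = np.array([1.0, 2.0])
w, V = np.linalg.eigh(A)
exact = V @ (np.sqrt(w) * (V.T @ c))    # A^{1/2} c computed directly
approx = sqrtm_times_vec(A, c)
print(np.allclose(approx, exact, atol=1e-6))  # True
```

With everything dense this sketch costs \(O(n^3)\) per right-hand-side evaluation; the paper's tridiagonal preprocessing is what reduces each solve to \(O(n)\).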
matrix square root
iterative method
Newton's method
initial value problem
convergence
Krylov subspace procedure
Lanczos method
Runge-Kutta-Fehlberg method
Hilbert matrix