Modified conjugate gradient method for the solution of \(Ax=b\) (Q1281489)

From MaRDI portal
Revision as of 08:08, 10 February 2024

scientific article
English
Modified conjugate gradient method for the solution of \(Ax=b\)

    Statements

    Modified conjugate gradient method for the solution of \(Ax=b\) (English)
    31 May 1999
    To solve \(Ax=b\) (\(A\) a symmetric positive definite matrix) by the classical conjugate gradient method, the iterate \(x_k\) in the \(k\)th step is obtained by minimizing \(\| x-x_k\| _A\) over \(x_k\in K_k(A,b) =\text{span}\{b,Ab,\ldots,A^{k-1}b\}\), where \(\| x\| _A^2=(x,Ax)\). The error can be estimated in terms of \(\kappa(A)\), the condition number of \(A\). In this paper, it is proposed to solve the system by minimizing over the Krylov subspace \(K_k(\sqrt{A},b)\) instead. The error can then be bounded in terms of \(\kappa(\sqrt{A})=\sqrt{\kappa(A)}\), which is smaller and hence gives a better rate of convergence. It is shown that, once \(b\) and \(\sqrt{A}b\) have been computed, the iterates are given by a short recurrence relation (as in the classical case) which requires only one multiplication by \(A\) per iteration step.
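    For reference, the classical method that the paper improves upon can be sketched as follows. This is a minimal NumPy sketch of standard conjugate gradients, not the paper's modified method; the function name and tolerance are illustrative choices.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Classical CG: the k-th iterate minimizes ||x - x_k||_A over the
    Krylov subspace K_k(A, b) = span{b, Ab, ..., A^{k-1} b}.
    A must be symmetric positive definite."""
    n = len(b)
    if max_iter is None:
        max_iter = n
    x = np.zeros(n)
    r = b.copy()              # residual b - A x (x starts at 0)
    p = r.copy()              # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p            # the one A-multiplication per step
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # short recurrence: only the
        rs = rs_new                 # previous direction is kept
    return x
```

    Each step costs one multiplication by \(A\), and the standard estimate bounds the \(A\)-norm error by \(2\bigl(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\bigr)^k\) times the initial error, with \(\kappa=\kappa(A)\); replacing \(\kappa(A)\) by \(\kappa(\sqrt{A})=\sqrt{\kappa(A)}\) in this bound is what yields the improved rate claimed for the modified method.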
    modified conjugate gradient method
    Krylov subspace method
    convergence
    stability
    condition number
