An efficient and stable Newton-type iterative method for computing generalized inverse \(A_{T,S}^{(2)}\) (Q2356070)


    Statements

    An efficient and stable Newton-type iterative method for computing generalized inverse \(A_{T,S}^{(2)}\) (English)
    28 July 2015
    A new Newton-type iterative matrix method for computing the generalized outer inverse is presented. The author considers an arbitrary complex matrix \(A\in \mathbb{C}^{m\times n}\) with \(\mathrm{rank}\, A=r\) together with prescribed subspaces \(T\subseteq\mathbb{C}^{n}\) and \(S\subseteq\mathbb{C}^{m}\); the aim is to develop an efficient Newton-type method for computing the generalized inverse (outer inverse) \(A^{(2)}_{T,S}\), i.e., the matrix \(X\) satisfying \(XAX=X\), \(\mathcal{R}(X)=T\) and \(\mathrm{Ker}(X)=S\). For large sparse matrices, iterative methods for computing such inverses are more suitable for implementation than direct methods; two classical examples are the Schulz scheme and the hyperpower iteration. The hyperpower iteration of order \(p\) requires \(p\) matrix-matrix multiplications per step, whereas the Schulz scheme attains quadratic convergence with only two matrix-matrix multiplications per step. For structured matrices, such as Toeplitz and Hankel matrices, there is the additional advantage that the matrix-matrix multiplications can be implemented via matrix-vector products. The new scheme is a variant of the seventh-order method derived from the hyperpower iteration, in which an elegant factorization dramatically reduces the number of matrix-matrix multiplications per step. Written in its dual form, the new scheme can be viewed as a variant of the Schulz scheme with higher computational efficiency. The convergence of the proposed iteration is analysed and an error bound for its rate of convergence is derived; under certain conditions on \(A\), and for an appropriately chosen initial approximation, the method is shown to converge with order seven. Using the so-called (computational) efficiency index, the author compares the new formulation with existing iterative methods of the same type; the proposed method turns out to be superior both from the theoretical viewpoint and in terms of practical cost. The stability of the new iterative method is also studied, and the theoretical results are illustrated by numerical examples computed with the programming package Mathematica 8.
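    The review names but does not reproduce the author's factorized seventh-order scheme, so the sketch below illustrates only the two classical iterations mentioned above, not the method of the paper. It is a minimal Python/NumPy implementation of the Schulz scheme and of the hyperpower iteration of order \(p\), applied to the Moore-Penrose inverse \(A^{\dagger}=A^{(2)}_{\mathcal{R}(A^{*}),\mathrm{Ker}(A^{*})}\) as the standard special case of the outer inverse; the function names, the tolerance and the starting guess \(X_0=\alpha A^{*}\) with \(\alpha=1/(\|A\|_{1}\|A\|_{\infty})\) are illustrative assumptions, not prescriptions taken from the paper.

```python
import numpy as np


def schulz(A, X0, tol=1e-12, max_iter=100):
    """Schulz scheme X_{k+1} = X_k (2I - A X_k): quadratic convergence,
    two matrix-matrix multiplications per step."""
    I = np.eye(A.shape[0])
    X = X0
    for _ in range(max_iter):
        X_new = X @ (2 * I - A @ X)           # 2 multiplications
        if np.linalg.norm(X_new - X, np.inf) < tol:
            return X_new
        X = X_new
    return X


def hyperpower(A, X0, p=7, tol=1e-12, max_iter=100):
    """Hyperpower iteration of order p (p = 2 recovers the Schulz scheme):
        R_k = I - A X_k,
        X_{k+1} = X_k (I + R_k + R_k^2 + ... + R_k^{p-1}).
    Evaluated naively, this costs p matrix-matrix multiplications per step."""
    I = np.eye(A.shape[0])
    X = X0
    for _ in range(max_iter):
        R = I - A @ X                         # 1 multiplication
        poly = I + R
        Rpow = R
        for _ in range(p - 2):                # p - 2 multiplications
            Rpow = Rpow @ R
            poly = poly + Rpow
        X_new = X @ poly                      # 1 multiplication (p in total)
        if np.linalg.norm(X_new - X, np.inf) < tol:
            return X_new
        X = X_new
    return X


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((8, 5))
    # Starting guess X0 = alpha * A^H with alpha = 1 / (||A||_1 ||A||_inf),
    # which guarantees convergence to the Moore-Penrose inverse A^+.
    alpha = 1.0 / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    X0 = alpha * A.conj().T
    print(np.allclose(schulz(A, X0), np.linalg.pinv(A)))            # expected: True
    print(np.allclose(hyperpower(A, X0, p=7), np.linalg.pinv(A)))   # expected: True
```

    The multiplication counts in the comments are what the efficiency-index comparison in the paper is about: with the commonly used index \(\mathrm{EI}=p^{1/\theta}\), where \(p\) is the convergence order and \(\theta\) the number of matrix-matrix multiplications per step, the Schulz scheme scores \(2^{1/2}\approx 1.41\) while the naive seventh-order hyperpower iteration scores only \(7^{1/7}\approx 1.32\); the gain of the proposed scheme comes from a factorization that lowers \(\theta\) for \(p=7\).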
    generalized inverse
    outer inverse
    Schulz scheme
    hyperpower iteration
    efficiency index
    large sparse matrices
    iterative methods
    convergence
    Toeplitz matrix
    Hankel matrix
    matrix-matrix multiplication
    error bound
    numerical example
    programming package Mathematica 8
