Generalized inverses. Theory and applications. (Q1865753)

scientific article

    Statements

    Generalized inverses. Theory and applications. (English)
    31 March 2003
    In recent years the need has been felt in numerous areas of applied mathematics for some kind of partial inverse of a matrix that is singular or even rectangular. Generalized inverses of matrices were first noted by E. H. Moore (1920), who defined a unique inverse for every constant matrix, although generalized inverses of integral and differential operators had already been mentioned in print by Fredholm (1903), Hilbert (1904) and others. A summary of Moore's work is given in the Appendix of the book. In 1955 Penrose showed that Moore's inverse, for a given matrix \(A\), is the unique matrix \(X\) satisfying the four equations \[ \begin{aligned} AXA &= A, \tag{1}\\ XAX &= X, \tag{2}\\ (AX)^{\ast}&= AX,\tag{3}\\ ( XA)^{\ast}&= XA,\tag{4} \end{aligned} \] where the symbol \(\ast \) denotes the conjugate transpose. Owing to this later rediscovery and its importance, this unique inverse is now called the Moore-Penrose inverse.

    In the Introduction, the authors describe the transition from the familiar inverse of square nonsingular matrices to generalized inverses of singular or rectangular matrices. A historical note on the discovery of generalized inverses, first for integral and differential operators (1903-1931) and then for matrices (1920-1955), is also given in the Introduction.

    Chapter 0 contains preliminary results from linear algebra that are used in subsequent chapters, such as scalars and vectors, linear transformations and matrices, elementary operations and permutations, Hermite normal forms, Jordan and Smith normal forms, etc. This chapter can be skipped on a first reading.

    Chapter 1 introduces the \(\{i,j,\dots,k\}\)-inverse as a matrix satisfying equations \((i),(j),\dots,(k)\) among equations (1)--(4). It then studies the existence and construction of various inverses, i.e. \(\{1\}\)-inverses (known as pseudoinverses or generalized inverses), \(\{1,2\}\)-inverses (semi-inverses or reciprocal inverses), \(\{1,2,3\}\)-inverses, \(\{1,2,4\}\)-inverses, and the \(\{1,2,3,4\}\)-inverse (the Moore-Penrose inverse, also called the general reciprocal or generalized inverse).

    In Chapter 2, a characterization of various generalized inverses is given in terms of solutions of specific linear systems. Other results presented in this chapter are the following: a) generalized inverses with prescribed range are constructed, b) restricted generalized inverses are defined and used in the solution of ``constrained'' linear equations, c) the Bott-Duffin inverse is defined and used in the solution of electrical network problems, and d) applications of \(\{1\}\)- and \(\{1,2\}\)-inverses to interval linear programming and to integral solutions of linear equations, respectively, are given.

    In Chapter 3, various generalized inverses are characterized and studied in terms of their minimization properties with respect to the class of ellipsoidal (or weighted Euclidean) norms and the more general class of essentially strictly convex norms. An extremal property of the Bott-Duffin inverse, with application to electrical networks, is also given.

    Chapter 4 studies generalized inverses having some of the spectral properties (i.e., properties related to eigenvalues and eigenvectors) of the inverse of a nonsingular matrix. Only square matrices are considered, since only they have eigenvalues and eigenvectors. More specifically, the chapter deals with the inverse \(X\) that satisfies the properties \(A^{k}XA=A^{k}\), \(XAX=X\), \(AX=XA\), where \(k\) is the index of \(A\). This inverse is called the Drazin inverse.
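    The defining relations above are easy to check numerically. The following is a minimal sketch (not taken from the book) that verifies the four Penrose equations for a rectangular matrix using NumPy's built-in pseudoinverse, and then checks the three Drazin relations for a singular matrix chosen here purely for illustration: a block-diagonal matrix with a nonsingular block and a nilpotent block, whose Drazin inverse is known in closed form.

```python
import numpy as np

# --- Moore-Penrose inverse: check the four Penrose equations (1)-(4) ---
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))                 # an arbitrary rectangular example
X = np.linalg.pinv(A)                           # NumPy's Moore-Penrose inverse

print(np.allclose(A @ X @ A, A))                # (1)  A X A = A
print(np.allclose(X @ A @ X, X))                # (2)  X A X = X
print(np.allclose((A @ X).conj().T, A @ X))     # (3)  (A X)* = A X
print(np.allclose((X @ A).conj().T, X @ A))     # (4)  (X A)* = X A

# --- Drazin inverse: check A^k X A = A^k, X A X = X, A X = X A ---
# For A = diag(C, N) with C nonsingular and N nilpotent, the Drazin
# inverse is diag(C^{-1}, 0) and the index k is the nilpotency index of N.
C = np.array([[2.0, 1.0],
              [0.0, 3.0]])                      # nonsingular block
N = np.array([[0.0, 1.0],
              [0.0, 0.0]])                      # nilpotent block, N^2 = 0, so k = 2
Z = np.zeros((2, 2))
A = np.block([[C, Z], [Z, N]])
X = np.block([[np.linalg.inv(C), Z], [Z, Z]])   # closed-form Drazin inverse of A
k = 2
Ak = np.linalg.matrix_power(A, k)
print(np.allclose(Ak @ X @ A, Ak))              # A^k X A = A^k
print(np.allclose(X @ A @ X, X))                # X A X = X
print(np.allclose(A @ X, X @ A))                # A X = X A
```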
    The spectral properties of the Drazin inverse are established, and a particular case of the Drazin inverse, the group inverse, is also studied. Finally, the quasi-commuting inverse and the strong spectral inverse are defined.

    In computing a generalized or ordinary inverse of a matrix, the size and difficulty of the problem may be reduced if the matrix is partitioned into submatrices. Chapter 5 studies generalized inverses of partitioned matrices and their application to the solution of linear equations. Intersections of linear manifolds are also studied in order to obtain common solutions of pairs of linear equations and to invert matrices partitioned by rows or columns.

    Chapter 6 studies the spectral theory of rectangular matrices. The authors approach the singular value decomposition (SVD) of rectangular matrices following \textit{C. Eckart} and \textit{G. Young} [Bull. Am. Math. Soc. 45, 118-121 (1939; Zbl 0020.19802)]. Some applications of the SVD are given, concerning: a) the Schmidt approximation theorem, which approximates a given matrix by matrices of lower rank, provided that the error of approximation is acceptable, b) the polar decomposition theorem, c) the study of the principal angles between subspaces, d) the study of the behavior of the Moore-Penrose inverse of a perturbed matrix \(A+E\) and its dependence on \(A^{\dag }\) and on the ``error'' \(E\), and e) the generalization by Penrose of the classical spectral theorem for normal matrices. Finally, a generalization of the SVD based on \textit{C. F. Van Loan} [SIAM J. Numer. Analysis 13, 76-83 (1976; Zbl 0338.65022)] is described; it concerns the simultaneous diagonalization of two matrices having the same number of columns.

    Chapter 7 presents computational methods for the unrestricted \(\{1\}\)- and \(\{1,2\}\)-inverses, \(\{2\}\)-inverses and the Moore-Penrose inverse. Two iterative methods are used for the computation of the Moore-Penrose inverse: a) Greville's method, which is a finite iterative method, and b) an iterative method producing a sequence of matrices \(\{ X_{k}, k=1,2,\dots\} \) that converges to the Moore-Penrose inverse \( A^{\dag }\) as \(k\to \infty \), under suitable initial approximations.

    Chapter 8 presents a selection of applications that illustrate the richness and potential of generalized inverses. The list of applications includes: a) the important operation of parallel sum, with applications in electrical networks, etc., b) the linear statistical model, c) Newton's method for the solution of nonlinear equations without requiring nonsingularity of the Jacobian matrix, d) the solution of continuous-time autoregressive (AR) representations, e) the properties of the transition matrix of a finite Markov chain, and f) the solution of singular linear difference equations. Finally, the last two sections deal with the matrix volume and its application to surface integrals and probability distributions.

    Chapter 9 is a brief and biased introduction to generalized inverses of linear operators between Hilbert spaces, with special emphasis on the similarities to the finite-dimensional case. The results are applied to integral and differential operators. Integral and series representations of generalized inverses, as well as iterative methods for their computation, are also given. Minimal properties of generalized inverses of operators between Hilbert spaces, analogous to the matrix case, are also studied.
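    As an illustration of two of the computational viewpoints summarized above (the SVD route of Chapter 6 and an iterative method of the kind mentioned for Chapter 7), here is a minimal sketch in NumPy. The particular iteration used, \(X_{k+1}=X_k(2I-AX_k)\) started from \(X_0=\alpha A^{\ast}\) with \(0<\alpha<2/\sigma_{\max}(A)^2\), is a standard Newton-Schulz-type scheme that converges to \(A^{\dag}\); whether it coincides with the scheme presented in the book is an assumption of this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))                 # an arbitrary rectangular example

# Chapter 6 viewpoint: A^+ from the SVD.  If A = U diag(s) V*, then
# A^+ = V diag(s^+) U*, where s^+ inverts only the nonzero singular values.
U, s, Vh = np.linalg.svd(A, full_matrices=False)
s_plus = np.divide(1.0, s, out=np.zeros_like(s), where=s > 1e-12 * s.max())
A_pinv = (Vh.conj().T * s_plus) @ U.conj().T    # V diag(s^+) U*
print(np.allclose(A_pinv, np.linalg.pinv(A)))

# Chapter 7 viewpoint: an iterative scheme converging to A^+.
# X_{k+1} = X_k (2 I - A X_k), starting from X_0 = alpha A* with
# 0 < alpha < 2 / sigma_max(A)^2  (a Newton-Schulz-type iteration,
# assumed here; not necessarily the exact scheme used in the book).
alpha = 1.0 / np.linalg.norm(A, 2) ** 2         # spectral norm = sigma_max
X = alpha * A.conj().T
I = np.eye(A.shape[0])
for _ in range(60):                             # quadratic convergence; 60 steps is ample here
    X = X @ (2 * I - A @ X)
print(np.allclose(X, np.linalg.pinv(A)))
```

    Greville's finite method, which builds \(A^{\dag}\) column by column, is not reproduced in this sketch.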
    The new material added in this second edition (the first edition appeared in 1974; Zbl 0305.15001) consists of the preliminary chapter (Chapter 0), the chapter on applications (Chapter 8), an Appendix on the work of E. H. Moore, and new exercises and applications. Each chapter is accompanied by suggestions for further reading, and the bibliography contains 901 references. The bibliography has also been posted by the authors on the web page of the International Linear Algebra Society, http://www.math.technion.ac.il//iic/research.html, where it is updated from time to time. The book contains more than 450 exercises at different levels of difficulty, many of which are solved in detail. This feature makes it suitable both for reference and self-study and for use as a classroom text. It can be used profitably by graduate students or advanced undergraduate students, only an elementary knowledge of linear algebra being assumed.
    generalized inverse
    pseudoinverse
    Moore-Penrose inverse
    reciprocal inverse
    matrix functions
    linear systems
    constrained linear systems
    Drazin inverse
    spectral theory
    singular value decomposition
    factorization
    textbook
    historical note
    Bott-Duffin inverse
    interval linear programming
    parallel sum
    linear statistical model
    Newton method
    autoregressive representation
    finite Markov chain
    singular linear difference equations
    linear operators
    Hilbert spaces
    iterative methods
    bibliography
    exercises

    Identifiers
