Introduction to matrix analysis and applications (Q5920270)

scientific article; zbMATH DE number 6255370


    5 February 2014
The main aim of this book is to explain certain important topics in matrix analysis from the point of view of functional analysis. Although the concept of Hilbert spaces appears frequently, only finite-dimensional spaces are used. The book treats several aspects of matrix analysis, including matrix monotone functions, matrix means, majorization, entropies, quantum Markov triplets, the Cramér-Rao inequality, and so on. The authors distribute the contents of the book into seven chapters, entitled as follows: {\parindent=6mm \begin{itemize}\item[1.] Fundamentals of operators and matrices \item[2.] Mappings and algebras \item[3.] Functional calculus and derivation \item[4.] Matrix monotone functions and convexity \item[5.] Matrix means and inequalities \item[6.] Majorization and singular values \item[7.] Some applications. \end{itemize}} Chapters 1--3 are devoted to introducing basic concepts and tools for complex matrices and operators. Chapters 4--7 contain a number of more advanced and less well-known topics. I agree with the authors that this part of the book is best used as a reference by active researchers in the field of quantum information theory. Each chapter ends with a section of notes and remarks and an interesting collection of unsolved exercises.

It is known that a linear mapping is essentially a matrix when the vector spaces are finite-dimensional. As said above, in this book the authors work basically with finite-dimensional complex Hilbert spaces. Chapter 1 collects basic concepts and tools for matrices and operators. The polar and spectral decompositions, so useful in studying operators on Hilbert spaces, are equally essential for complex matrices. Several results in this area are described; in particular, the canonical Jordan form of a square matrix is presented in Section 1.3. Among the most basic notions of matrices introduced in this chapter are eigenvalues, singular values, the trace, the determinant, \dots
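As a quick numerical illustration of the decompositions mentioned above (not taken from the book; the matrix here is an arbitrary random example), the polar decomposition \(A = UP\) of a square complex matrix can be read off from its singular value decomposition:

```python
import numpy as np

# Sketch: the polar decomposition A = U P of a square complex matrix,
# obtained from the SVD A = W Σ V* via U = W V* (unitary) and
# P = V Σ V* (positive semidefinite).
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

W, s, Vh = np.linalg.svd(A)
U = W @ Vh                          # unitary factor
P = Vh.conj().T @ np.diag(s) @ Vh   # positive semidefinite factor

assert np.allclose(U @ P, A)                      # A = U P
assert np.allclose(U.conj().T @ U, np.eye(3))     # U is unitary
assert np.all(np.linalg.eigvalsh(P) >= -1e-12)    # P >= 0
```

The singular values of \(A\) are exactly the eigenvalues of the positive factor \(P\), which connects this decomposition to the singular-value material of Chapter 6.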
A less elementary but important subject is tensor products, in particular the Kronecker product of matrices, discussed in the last section.

Chapter 2 covers block matrices, partial ordering, and an elementary theory of von Neumann algebras in the finite-dimensional setting. The idea of block matrices provides quite a useful tool in matrix theory, and some basic facts about them, such as the Schur factorization, the \(UL\)-factorization, \dots, are presented. One of the primary structures on matrices is the order coming from the partial order of positive semidefiniteness, as the authors explain in this chapter. Based on this order, several notions of positivity for linear maps between matrix algebras are analyzed. This material includes Kadison's inequality and completely positive mappings.

Chapter 3 details the matrix functional calculus. Given a matrix \(A\) and a function \(f\) holomorphic in a domain \(G\) containing the eigenvalues of \(A\), the functional calculus produces a new matrix \(f(A)\) from the Cauchy integral \[ f(A)=\frac{1}{2 \pi i}\int_{\Gamma} f(z)(zI-A)^{-1}dz, \] where \(\Gamma\) is a simple closed contour in \(G\) surrounding the eigenvalues of \(A\). A typical example is the exponential function \(e^A= \sum_{n=0}^{\infty} \frac{A^n}{n!}\), and some results related to this function are presented. If \(f\) is sufficiently smooth, then \(f(A)\) is also smooth and a useful Fréchet differential formula holds. The last section is devoted to describing some results on the Fréchet derivative of a matrix function \(F\) defined on the self-adjoint matrices.

In Chapter 4, the authors describe the relationships between matrix monotone functions and convexity. A real function \(f\) defined on an interval \(I\) is matrix monotone if \(A \leq B\) implies \(f(A) \leq f(B)\) for Hermitian matrices \(A\) and \(B\) whose eigenvalues lie in \(I\).
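For Hermitian matrices the functional calculus reduces to applying \(f\) to the eigenvalues in a spectral decomposition, \(f(A) = U\,\mathrm{diag}(f(\lambda_i))\,U^*\). A minimal sketch (my own illustration, not from the book; the matrix is an arbitrary example) checks this against the power series for \(e^A\):

```python
import numpy as np

def fun_calc(f, A):
    """Apply f to a Hermitian matrix A via the spectral decomposition
    A = U diag(w) U*, so that f(A) = U diag(f(w)) U*."""
    w, U = np.linalg.eigh(A)
    return (U * f(w)) @ U.conj().T

A = np.array([[1.0, 0.5],
              [0.5, 2.0]])

expA = fun_calc(np.exp, A)

# Compare with a truncation of the power series e^A = sum_n A^n / n!
S, term = np.eye(2), np.eye(2)
for n in range(1, 30):
    term = term @ A / n
    S = S + term

assert np.allclose(expA, S)
```

For Hermitian arguments this agrees with the Cauchy-integral definition above, since both reproduce \(f\) on the spectrum.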
In real analysis, monotonicity and convexity are not directly related, but in matrix analysis the situation is different: for example, a matrix monotone function on \((0, \infty)\) is matrix concave. Matrix monotone and matrix convex functions have several applications, but for a concrete function it is not so easy to verify matrix monotonicity or matrix convexity. Such functions are typically described in terms of integral formulas. This is the case for Pick functions, introduced in Section 4.3, a concept closely related to matrix monotonicity. The main aim of the last section of this chapter is to prove the primary result of Löwner's theory, which says that a matrix monotone function on \((a,b)\) belongs to the set of all Pick functions that admit a continuous extension to \(\mathbb{C}^{+} \cup (a,b)\) with real values on \((a,b)\).

Means of numbers have been a much-studied subject, and the inequalities \[ \frac{2ab}{a+b} \leq \sqrt{ab} \leq \frac{a+b}{2}, \] between the harmonic, geometric and arithmetic means of positive numbers are well known. Matrix extensions of the arithmetic and harmonic means are rather easy, but it is non-trivial to define a matrix version of the geometric mean. In Chapter 5, the authors generalize the geometric mean to positive matrices, and several other means are studied in terms of matrix monotone functions. The general theory of matrix means developed by \textit{F. Kubo} and \textit{T. Ando} [Math. Ann. 246, 205--224 (1980; Zbl 0412.47013)] is closely related to operator monotone functions on \((0, \infty)\). There are also more complicated means, such as the mean transformation \(M(A,B)=m(\mathbb{L}_A, \mathbb{R}_B)\), defined as a mean of the left-multiplication \(\mathbb{L}_A\) and the right-multiplication \(\mathbb{R}_B\), described by the authors in Section 5.4. Another useful concept is the multivariable extension of two-variable matrix means.
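The standard matrix geometric mean is \(A \,\#\, B = A^{1/2}\bigl(A^{-1/2} B A^{-1/2}\bigr)^{1/2} A^{1/2}\). A small numerical sketch (my own illustration, not from the book; the matrices are arbitrary positive definite examples) verifies its Riccati characterization \(X A^{-1} X = B\) and the harmonic-geometric-arithmetic ordering in the positive semidefinite order:

```python
import numpy as np

def sqrt_pd(A):
    """Square root of a positive definite matrix via eigendecomposition."""
    w, U = np.linalg.eigh(A)
    return (U * np.sqrt(w)) @ U.conj().T

def geometric_mean(A, B):
    """A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}."""
    Ah = sqrt_pd(A)
    Ahi = np.linalg.inv(Ah)
    return Ah @ sqrt_pd(Ahi @ B @ Ahi) @ Ah

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # positive definite
B = np.array([[3.0, 0.0], [0.0, 1.0]])   # positive definite

G  = geometric_mean(A, B)
AM = (A + B) / 2
HM = 2 * np.linalg.inv(np.linalg.inv(A) + np.linalg.inv(B))

# G is the unique positive solution of the Riccati equation X A^{-1} X = B
assert np.allclose(G @ np.linalg.inv(A) @ G, B)
# HM <= G <= AM in the positive semidefinite (Loewner) order
assert np.all(np.linalg.eigvalsh(AM - G) >= -1e-10)
assert np.all(np.linalg.eigvalsh(G - HM) >= -1e-10)
```

Unlike the scalar case, \(\sqrt{AB}\) itself fails for non-commuting matrices (it need not even be Hermitian), which is precisely why the symmetrized formula above is needed.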
Chapter 6 discusses majorization for eigenvalues and singular values of matrices. The classical concept for vectors is presented in Section 6.1, and its matrix-theoretic counterpart is developed in Sections 6.2 and 6.3. Basic properties of singular values of matrices are given in Section 6.2; this section also contains several fundamental majorizations, such as the Lidskii-Wielandt and Gelfand-Naimark theorems, for the eigenvalues of Hermitian matrices and the singular values of general matrices. Section 6.3 analyzes symmetric, or unitarily invariant, norms of matrices. Several famous majorizations for matrices, which have strong applications to matrix norm inequalities in symmetric norms, are described. Section 6.4 collects several more recent majorization results for positive matrices involving concave or convex functions, operator monotone functions, or certain matrix means. For instance, symmetric norm inequalities of Golden-Thompson type and of its complementary type are presented.

The last chapter of this book contains topics related to quantum applications. Positive matrices with trace 1, also called density matrices, are the states in quantum theory. One of the most important concepts in probability theory is the Markov property; it is discussed in the first section in the setting of Gaussian probabilities. The structure of covariance matrices of Gaussian probabilities with the Markov property is connected with the Boltzmann entropy; its quantum analogue, in the setting of CCR-algebras, is the subject of Section 7.3. In Section 7.4, the authors study some results on how to construct optimal quantum measurements. The last section is concerned with the quantum version of the Cramér-Rao inequality, which is a certain matrix inequality between a kind of generalized variance and the quantum Fisher information.
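Majorization \(x \succ y\) means the partial sums of the decreasing rearrangement of \(x\) dominate those of \(y\), with equal total sums. A minimal sketch (my own illustration, not from the book; the matrix is a random example) checks the classical Schur theorem that the eigenvalue vector of a Hermitian matrix majorizes its diagonal:

```python
import numpy as np

def majorizes(x, y, tol=1e-10):
    """True if x majorizes y: equal sums, and the partial sums of the
    decreasing rearrangement of x dominate those of y."""
    xs = np.sort(np.asarray(x, dtype=float))[::-1]
    ys = np.sort(np.asarray(y, dtype=float))[::-1]
    return (np.isclose(xs.sum(), ys.sum())
            and np.all(np.cumsum(xs) >= np.cumsum(ys) - tol))

# Schur's theorem: for Hermitian H, eig(H) majorizes diag(H).
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4))
H = (X + X.T) / 2                      # real symmetric (Hermitian) example

assert majorizes(np.linalg.eigvalsh(H), np.diag(H))
```

The Lidskii-Wielandt and Gelfand-Naimark theorems mentioned above are deeper statements of the same type, comparing eigenvalue or singular value vectors of sums and products of matrices.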
    matrix analysis
    partial ordering
    subalgebras
    matrix functions
    matrix monotone functions
    convexity
    matrix means
    majorization
    quantum theory
    textbook
    matrix exponential
    functional calculus
    singular value
    quantum information theory
    canonical Jordan form
    eigenvalue
    trace
    determinant
    tensor product
    Kronecker product
    von Neumann algebra
    Schur factorization
    \(UL\)-factorization
    Kadison's inequality
    completely positive mapping
    Pick function
    Hermitian matrices
    unitarily invariant norm
    matrix norm inequalities
    positive matrices
    Boltzmann entropy
    Cramér-Rao inequality
    quantum Fisher information
