Gaussian fluctuations for linear eigenvalue statistics of products of independent iid random matrices (Q785408)
6 August 2020
The paper under review establishes Gaussian fluctuations with a universal limiting variance for linear eigenvalue statistics of products of independent iid random matrices, generalizing and improving the results of [\textit{B. Rider} and \textit{J. W. Silverstein}, Ann. Probab. 34, No. 6, 2118--2143 (2006; Zbl 1122.15022)] and of the second author and \textit{D. Renfrew} [J. Theor. Probab. 29, No. 3, 1121--1191 (2016; Zbl 1388.60031)] on linear spectral statistics of a single iid matrix. Background and previous related results are detailed in Section 1.

The main result of the paper is Theorem 2.2 (fluctuations of linear statistics for products of iid random matrices): there exist deterministic sequences \(\{A_n(f_i)\}\) such that \[ \operatorname{tr} f_i(P_n/\sigma) - A_n(f_i) \to F(f_i), \qquad i=1, 2, \dots, s, \] jointly in distribution, where \((F(f_i))_{1\le i \le s}\) is a mean-zero multivariate Gaussian random vector with prescribed variance and covariance, \(P_n = \prod_{j=1}^m (\frac{1}{\sigma_j}X_{n, j})\), and \(\sigma = \prod_{j=1}^m \sigma_j\), for independent iid \(n\times n\) random matrices \(X_{n,j}\) with atom variables \(\xi_j\), \(j=1, 2, \dots, m\). The deterministic sequences \(\{A_n(f_i)\}\) are centering terms playing the role of the expectation in a usual limit theorem, and the variance and covariance terms are the same as in the case of a single iid matrix.

The whole paper is devoted to the proof of Theorem 2.2, following the main ideas and techniques of the second author and Renfrew [loc. cit.] and of Rider and Silverstein [loc. cit.]. For example, the entries of \(P_n\) are no longer independent when \(m>1\), and a linearized block matrix has to be introduced to get around this issue. The main challenge in analyzing linear statistics of non-Hermitian random matrices is computing the limiting variance.

Section 3 introduces additional concepts and notation for the proof of Theorem 2.2. The linearization of the product into a block matrix (\({\mathcal M}_{i, i+1} = M_i\) with indices taken mod \(m\), and zero blocks otherwise) satisfies \(\det ({\mathcal M}^m - zI) = \left( \det (M_1 M_2 \cdots M_m - z I)\right)^m\) for every complex \(z\); a concrete illustration is sketched below. Section 4 truncates the entries of each factor matrix and applies the linearization technique. Theorem 4.1 states the limiting result in terms of this spectral relation from matrix algebra, and Theorem 2.2 follows from Theorem 4.1 by the Cramér-Wold device together with technical results from the Appendix on the smallest and largest singular values of a matrix. Assumption 2.1 requires the real-valued random variables \(\xi_j\) (\(1\le j \le m\)) to have mean zero, nonzero variance \(\sigma_j^2\), and \(E|\xi_j|^{4+\tau} <\infty\) for some \(\tau > 0\). Lemma 4.3 and Lemmas 4.5--4.11 concern truncated iid matrices and random variables satisfying Assumption 2.1. The truncation \(\tilde{\xi}\) of \(\xi\) to \(|\xi|\le n^{1/2-\varepsilon}\) and the rescaling \(\hat{\xi} = \frac{\tilde{\xi}}{\sqrt{\operatorname{Var}(\tilde{\xi})}}\) give control of the variance of \(\tilde{\xi}\) and of its fourth-moment relation with \(\xi\) in Lemma 4.3; Lemmas 4.5--4.11 extend this to the iid random matrices, controlling expectations, probabilities, and \(L^2\)-norms, with \(E\|\hat{X}_n - \dot{X}_n\|_2^2 = o(1)\) for the matrices and \(E\|\dot{P}_n-\hat{P}_n\|_2^2 = o (\frac{1}{n})\) for the corresponding products. Theorem 4.12 is a version of the main result Theorem 4.1 for the truncated iid random matrices, and Theorem 4.13 passes from the truncated Theorem 4.12 to the non-truncated Theorem 4.1 through the linearization procedure. The proof of Theorem 4.1 then follows from Theorems 4.12 and 4.13 and some technical lemmas in Appendix B.
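To make the linearization identity above concrete, here is a minimal sketch for \(m=3\), in the notation of this review only (the paper's exact normalization of the blocks is not reproduced): with \({\mathcal M}_{1,2}=M_1\), \({\mathcal M}_{2,3}=M_2\), \({\mathcal M}_{3,1}=M_3\) and all other blocks zero, \[ {\mathcal M} = \begin{pmatrix} 0 & M_1 & 0 \\ 0 & 0 & M_2 \\ M_3 & 0 & 0 \end{pmatrix}, \qquad {\mathcal M}^3 = \begin{pmatrix} M_1M_2M_3 & 0 & 0 \\ 0 & M_2M_3M_1 & 0 \\ 0 & 0 & M_3M_1M_2 \end{pmatrix}. \] The diagonal blocks of \({\mathcal M}^3\) are cyclic permutations of the same product and hence share one characteristic polynomial (for square matrices \(A, B\), the products \(AB\) and \(BA\) have equal characteristic polynomials), so \(\det({\mathcal M}^3 - zI) = \left(\det(M_1M_2M_3 - zI)\right)^3\); the same computation gives the identity for general \(m\).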
The proof of Theorem 4.12 is given at the end of Section 4, and Section 5 is devoted to the proof of Theorem 4.13. Theorem 5.2 identifies the limiting distribution of the process formed by the difference between the trace of the resolvent and the expectation of the trace of the resolvent, and the proof of Theorem 5.2 consists of showing that the two conditions of Theorem 5.3 are satisfied. Section 6 proves the convergence of the finite-dimensional distributions, and Section 7 proves the tightness of the sequence of stochastic processes built from the resolvent and from the difference between the resolvent and its expectation. Sections 5--7 rely heavily on ideas and techniques from [Rider and Silverstein, loc. cit.; the second author and Renfrew, loc. cit.] and on the Cramér-Wold device for convergence. The main result, Theorem 2.2, then follows from Theorem 4.1 and the Cramér-Wold device, as sketched below. It would be interesting to extend the result to algebraic expressions of independent iid random matrices other than products.
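For concreteness, the Cramér-Wold reduction invoked above can be stated as follows; this is the standard formulation written in the notation of Theorem 2.2, not a verbatim statement from the paper. The joint convergence of the vector \(\left(\operatorname{tr} f_i(P_n/\sigma) - A_n(f_i)\right)_{1\le i\le s}\) to the Gaussian vector \((F(f_i))_{1\le i\le s}\) is equivalent to the one-dimensional convergence \[ \sum_{i=1}^s t_i\left(\operatorname{tr} f_i(P_n/\sigma) - A_n(f_i)\right) \to \sum_{i=1}^s t_i F(f_i) \quad\text{in distribution, for every fixed } (t_1,\dots,t_s)\in\mathbb{R}^s, \] where the limit is a centered Gaussian variable with variance \(\sum_{i,j=1}^s t_i t_j \operatorname{Cov}(F(f_i), F(f_j))\); it therefore suffices to prove a central limit theorem for a single linear combination of the statistics.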
random matrices
linear eigenvalue statistics
non-Hermitian random matrices
iid random matrices
product matrices