Central limit theorem for linear eigenvalue statistics for a tensor product version of sample covariance matrices (Q1661592)
From MaRDI portal
arXiv ID: 1602.08613
scientific article
Language | Label | Description | Also known as |
---|---|---|---|
English | Central limit theorem for linear eigenvalue statistics for a tensor product version of sample covariance matrices | scientific article | |
Statements
Central limit theorem for linear eigenvalue statistics for a tensor product version of sample covariance matrices (English)
16 August 2018
Random matrix theory is a thriving area of research that attracts considerable attention due to its importance in various branches of mathematics, such as probability theory, quantum information theory and asymptotic geometric analysis, to mention just a few. In this paper, the author extends a result by \textit{A. Ambainis} et al. [Commun. Math. Phys. 310, No. 1, 25--74 (2012; Zbl 1243.15021)] and, in addition, proves a central limit theorem for linear eigenvalue statistics.

More precisely, let \(k,m,n\in\mathbb N\) and consider the class
\[
\mathcal M_{n,m,k}(Y) = \sum_{\alpha=1}^m \tau_\alpha Y_\alpha Y^T_\alpha,\quad Y_\alpha := Y^{(1)}_\alpha \otimes\cdots\otimes Y^{(k)}_\alpha
\]
of \(n^k\times n^k\) random matrices, where \(\tau_\alpha\in\mathbb R\) (with \(\alpha\in\{1,\dots,m\}\)) and the \(Y^{(j)}_\alpha\) (with \(\alpha\in\{1,\dots,m\}\), \(j\in\{1,\dots,k\}\)) are independent and identically distributed copies of a normalized isotropic random vector \(Y\in\mathbb R^n\). For fixed \(k\in\mathbb N\), if the normalized counting measures of \(\{\tau_\alpha\}_\alpha\) converge weakly (as \(m,n\to\infty\) with \(m/n^k\to c\in[0,\infty)\)) and \(Y\) is a good vector (see Definition 1.1 in the paper), then the normalized counting measures of the eigenvalues of \(\mathcal M_{n,m,k}(Y)\) converge weakly in probability to a non-random limiting distribution given by the famous Marchenko-Pastur law. For the case \(k=2\) (and a suitable subclass of good vectors \(Y\)), the author then proves the already mentioned central limit theorem for the centered linear eigenvalue statistics.
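The ensemble is straightforward to simulate numerically. The following minimal sketch is not part of the paper: the function name sample_M, the Gaussian choice of \(Y\) and all parameter values are illustrative assumptions. It draws one matrix from \(\mathcal M_{n,m,k}(Y)\) with all \(\tau_\alpha=1\), using factors \(Y^{(j)}_\alpha\) with independent centered Gaussian entries of variance \(1/n\) as a stand-in for a normalized isotropic good vector (the precise condition is Definition 1.1 of the paper), and compares the empirical eigenvalue distribution with the Marchenko-Pastur law of ratio \(c=m/n^k\).

```python
# A minimal numerical sketch, not taken from the paper: draw one matrix from
# the ensemble M_{n,m,k}(Y) defined above, with all tau_alpha = 1 and Gaussian
# factors scaled so that E||Y^(j)||^2 = 1 (an assumed stand-in for the
# "normalized isotropic" vectors of Definition 1.1), and inspect its
# empirical eigenvalue distribution.
import numpy as np
from functools import reduce


def sample_M(n, m, k, tau=None, rng=None):
    """One realization of M_{n,m,k}(Y) = sum_alpha tau_alpha * Y_alpha Y_alpha^T,
    where Y_alpha = Y_alpha^(1) (x) ... (x) Y_alpha^(k)."""
    rng = np.random.default_rng() if rng is None else rng
    tau = np.ones(m) if tau is None else np.asarray(tau, dtype=float)
    M = np.zeros((n ** k, n ** k))
    for alpha in range(m):
        # k independent Gaussian factors in R^n with E||Y^(j)||^2 = 1
        factors = [rng.standard_normal(n) / np.sqrt(n) for _ in range(k)]
        Y_alpha = reduce(np.kron, factors)  # tensor (Kronecker) product in R^(n^k)
        M += tau[alpha] * np.outer(Y_alpha, Y_alpha)
    return M


if __name__ == "__main__":
    n, k, c = 20, 2, 0.5                    # matrix size n^k = 400, target ratio c
    m = int(round(c * n ** k))
    eigs = np.linalg.eigvalsh(sample_M(n, m, k, rng=np.random.default_rng(0)))
    # For tau_alpha = 1 and m/n^k -> c < 1, the Marchenko-Pastur law has an
    # atom of mass 1 - c at zero and an absolutely continuous part supported
    # on [(1 - sqrt(c))^2, (1 + sqrt(c))^2]; its mean equals c.
    bulk = eigs[eigs > 1e-8]
    print("fraction of (numerically) zero eigenvalues:", np.mean(eigs <= 1e-8))
    print("bulk range:", bulk.min(), "to", bulk.max())
    print("mean eigenvalue:", eigs.mean(), "(expected ~", c, ")")
```

With \(n=20\), \(k=2\) and \(c=0.5\) the matrix is \(400\times400\) and has rank at most \(m=200\), so about half of the eigenvalues vanish (matching the mass \(1-c\) of the Marchenko-Pastur atom at zero), while the nonzero eigenvalues should roughly fill the interval \([(1-\sqrt c)^2,(1+\sqrt c)^2]\).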
random matrix
sample covariance matrix
central limit theorem
linear eigenvalue statistics