The smallest singular value of a shifted random matrix (Q6071180)

From MaRDI portal
scientific article; zbMATH DE number 7768812

    Statements

    21 November 2023
If \(A\) is an \(n\times n\) random matrix whose entries are independent and identically distributed (i.i.d.) subgaussian random variables with mean zero and unit variance, then [\textit{M. Rudelson} and \textit{R. Vershynin}, Adv. Math. 218, No. 2, 600--633 (2008; Zbl 1139.15015)] gives the following estimate for the smallest singular value of \(A\): \(\mathbb{P}(s_{\mathrm{min}}(A) \leq \varepsilon n^{-1/2}) \leq C\varepsilon + c^{n}\) for all \(\varepsilon\geq 0\), where \(c\in (0,1)\) and \(C > 0\) are absolute constants. A closely related topic is the smoothed analysis of the smallest singular value of random matrices. In [\textit{D. A. Spielman} and \textit{S.-H. Teng}, J. ACM 51, No. 3, 385--463 (2004; Zbl 1192.90120)], the question was raised of why the simplex algorithm is often successful in practice. In this context, the running time of the algorithm is related to the condition number of some input matrix \(M\). Even if \(M\) is ill conditioned, the computer actually works with a small perturbation \(M + R\) of \(M\), where \(R\) is a random matrix, and \(M + R\) is usually well conditioned, which makes the algorithm succeed. This motivates estimating the smallest singular value of \(M + R\). With this motivation, \textit{A. Sankar} et al. [SIAM J. Matrix Anal. Appl. 28, No. 2, 446--476 (2006; Zbl 1179.65033)] proved that if \(R\) is a random matrix with i.i.d. standard Gaussian entries, then for any deterministic \(n\times n\) matrix \(M\) one has \(\mathbb{P}(s_{\mathrm{min}}(R + M) \leq \varepsilon) \leq C\varepsilon n^{1/2}\). Now let \(R_{n}(\omega)\) be an \(n \times n\) matrix whose entries are independent copies of a centered subgaussian real random variable of unit variance, and let \(M\) be an arbitrary deterministic matrix with \(\|M\| \leq n^{\gamma}\).
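The smoothed-analysis phenomenon described above can be illustrated numerically. The following sketch (not from the paper under review; the rank-deficient matrix \(M\) is a hypothetical example) shows that a singular deterministic matrix, once perturbed by an i.i.d. Gaussian matrix \(R\), typically has a smallest singular value bounded well away from zero:

```python
import numpy as np

# Illustrative sketch of smoothed analysis: a singular deterministic
# matrix M becomes well conditioned after an i.i.d. Gaussian
# perturbation R.  M here is a hypothetical rank-one example.
rng = np.random.default_rng(0)
n = 200

# Rank-deficient M: the same row repeated n times, so s_min(M) = 0.
row = rng.standard_normal(n)
M = np.tile(row, (n, 1))

R = rng.standard_normal((n, n))   # i.i.d. N(0, 1) entries
A = M + R                         # the perturbed (shifted) matrix

# np.linalg.svd returns singular values in descending order.
s_min_M = np.linalg.svd(M, compute_uv=False)[-1]
s_min_A = np.linalg.svd(A, compute_uv=False)[-1]
print(f"s_min(M)     = {s_min_M:.2e}")   # numerically ~ 0
print(f"s_min(M + R) = {s_min_A:.2e}")   # bounded away from 0
```

In accordance with the estimate of Sankar et al., the smallest singular value of \(M + R\) is typically of order \(n^{-1/2}\), no matter how degenerate \(M\) is.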
In the paper under review, the author focuses on an estimate for the smallest singular value of the matrix \(A_{n}(\omega) = R_{n}(\omega) + M\). It is proved that there exist constants \(c_{1} > 0\), \(c_{2}> 0\), and a positive integer \(n_{0}\) such that for any \(\gamma\) with \(1/2 < \gamma < 1\) and any \(n > n_{0}\), one has \(\mathbb{P}(s_{\mathrm{min}}(A_{n})\leq \varepsilon n^{-\gamma}) \leq c_{2}(n^{\gamma - 1/2} \varepsilon + \exp(-c_{1} n^{2-2\gamma}))\). The proof rests on a new version of the least common denominator (LCD) method, applied to a truncated part of a vector on the sphere rather than to the whole vector. To construct an epsilon net of all vectors on the sphere with small truncated LCD, each vector is split into two parts: one with large support and possibly small norm (the so-called residual part), and one with small support and possibly large norm (the so-called sparse part); epsilon nets are then constructed for each of these parts. Note that, for matrices with i.i.d. subgaussian entries, the above statement improves a result in [\textit{T. Tao} and \textit{V. Vu}, Math. Comput. 79, No. 272, 2333--2352 (2010; Zbl 1253.65067)], even though the latter applies to a broader class of matrices.
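A rough Monte Carlo sketch of the theorem's tail bound follows. Since the constants \(c_{1}, c_{2}\) are not explicit, only the qualitative behaviour is checked; the Rademacher entries are one simple choice of centered subgaussian variable, and the shift \(M\) below is a hypothetical example with \(\|M\| = n^{\gamma}\), as the theorem permits:

```python
import numpy as np

# Monte Carlo sketch of the bound
#   P(s_min(A_n) <= eps * n^{-gamma})
#     <= c2 * (n^{gamma - 1/2} * eps + exp(-c1 * n^{2 - 2*gamma}))
# for A_n = R_n + M, with Rademacher (subgaussian) entries and a
# hypothetical deterministic shift M of operator norm n^gamma.
rng = np.random.default_rng(1)
n, gamma, trials = 100, 0.75, 300

M = np.zeros((n, n))
M[0, 0] = n ** gamma                 # operator norm ||M|| = n^gamma

s_mins = np.empty(trials)
for t in range(trials):
    R = rng.choice([-1.0, 1.0], size=(n, n))   # i.i.d. Rademacher
    s_mins[t] = np.linalg.svd(R + M, compute_uv=False)[-1]

# Empirical tail probabilities at a few values of eps.
eps_grid = [0.1, 0.5, 1.0]
p_hats = [float(np.mean(s_mins <= e * n ** (-gamma))) for e in eps_grid]
for e, p in zip(eps_grid, p_hats):
    print(f"eps = {e:4.1f}:  empirical P(s_min <= eps * n^-gamma) = {p:.3f}")
```

As the bound predicts, the empirical tail probability is small for small \(\varepsilon\) and grows roughly linearly in \(\varepsilon\), up to the exponentially small additive term.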
    random matrices
    smallest singular value
    smoothed analysis
    Littlewood-Offord problem
