Toward a unified theory of sparse dimensionality reduction in Euclidean space (Q496171)

From MaRDI portal

scientific article
Language: English
Label: Toward a unified theory of sparse dimensionality reduction in Euclidean space
Description: scientific article

    Statements

    Toward a unified theory of sparse dimensionality reduction in Euclidean space (English)
    21 September 2015
    The authors consider certain random matrices, the so-called sparse Johnson-Lindenstrauss transforms, \(\Phi\in \mathbb{R}^{m\times n}\) with \(s\) non-zero entries in each column. For a given subset \(T\) of the unit sphere of \(\mathbb{R}^n\) and \(0<\varepsilon<1/2\), one is interested in finding conditions on \(m\) and \(s\) such that \(\Phi\) is \((1+\varepsilon)\)-isometric on \(T\) in expectation, i.e., \[ \mathbb{E}\sup_{x\in T}|\|\Phi x\|_2^2-1|<\varepsilon. \tag{1} \] For this purpose, the authors introduce a certain parameter \(\kappa_{s,m}(T)\) of the set \(T\). Their main theorem provides sufficient conditions on \(m\) and \(s\), expressed in terms of \(\kappa_{s,m}(T)\), for inequality (1) to hold. After the introduction (Section 1) and the necessary preliminaries (Section 2), the authors give a short overview of the ideas behind the proof of the main theorem (Section 3). The fourth section concerns the special case \(T=E\cap S^{n-1}\) for a subspace \(E\subset \mathbb{R}^n\). In Section 5, the authors consider another special case, namely that the seminorm \(y\mapsto \sup_{x\in T}|\langle x,y\rangle|\) induced by \(T\) has a small type-\(2\) constant. All this serves as a warm-up for the proof of the general theorem, which is given in Section 7. In between, in Section 6, the authors consider applications to constrained least squares programs. Several further applications, obtained from the main theorem for special choices of \(T\), are presented in Section 8; these are relevant, for example, to compressed sensing, numerical linear algebra, and manifold learning. In all applications, the authors qualitatively recover or even improve the previously known results (in some cases, no nontrivial results were known before). The last section contains a discussion of possible directions for future research. Three appendices, containing the necessary material from probability theory, convex analysis, and least squares programs, are also provided.
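    For illustration only, the following minimal Python sketch (the reviewer's, not taken from the paper) builds one standard sparse Johnson-Lindenstrauss construction, with \(s\) nonzero entries of \(\pm 1/\sqrt{s}\) placed in uniformly chosen rows of each column, and then empirically evaluates the distortion appearing in (1) on a finite random subset \(T\) of the unit sphere; the values of \(n\), \(m\), and \(s\) are arbitrary illustrative choices.

# Minimal sketch of a sparse Johnson-Lindenstrauss transform (illustrative only):
# each column of Phi has exactly s nonzero entries equal to +/- 1/sqrt(s),
# placed in s distinct rows chosen uniformly at random.
import numpy as np

def sparse_jl_matrix(m, n, s, rng=None):
    """Return an m x n sparse JL matrix with s nonzeros per column."""
    rng = np.random.default_rng(rng)
    Phi = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=s, replace=False)   # s distinct rows for column j
        signs = rng.choice([-1.0, 1.0], size=s)        # Rademacher signs
        Phi[rows, j] = signs / np.sqrt(s)              # each column has unit norm
    return Phi

# Empirical analogue of (1) for a finite random set T of unit vectors:
# sup over x in T of | ||Phi x||_2^2 - 1 | should be small for suitable m and s.
rng = np.random.default_rng(0)
n, m, s = 1000, 200, 8                                 # illustrative parameters
T = rng.standard_normal((50, n))
T /= np.linalg.norm(T, axis=1, keepdims=True)          # 50 points on the unit sphere

Phi = sparse_jl_matrix(m, n, s, rng=rng)
distortion = np.max(np.abs(np.sum((T @ Phi.T) ** 2, axis=1) - 1.0))
print(f"sup_x | ||Phi x||_2^2 - 1 | = {distortion:.3f}")

    Since every column of \(\Phi\) has unit Euclidean norm and the signs are independent and symmetric, \(\mathbb{E}\|\Phi x\|_2^2=1\) for each fixed unit vector \(x\); the printed quantity is thus the empirical counterpart of the left-hand side of (1) for the finite set \(T\).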
    sparse Johnson-Lindenstrauss transform
    random matrices
    constrained least squares programs
    compressed sensing
    manifold learning
    numerical linear algebra

    Identifiers

    arXiv ID: 1311.2542