Toward a unified theory of sparse dimensionality reduction in Euclidean space (Q496171)

From MaRDI portal
Property / review text: The authors consider certain random matrices, the so-called sparse Johnson-Lindenstrauss transforms, \(\Phi\in \mathbb{R}^{m\times n}\) with \(s\) non-zero entries in each column. For a given subset \(T\) of the unit sphere of \(\mathbb{R}^n\) and \(0<\varepsilon<1/2\), one is interested in finding conditions on \(m\) and \(s\) such that \(\Phi\) is \((1+\varepsilon)\)-isometric on \(T\) in expectation, i.e., \[ \mathbb{E}\sup_{x\in T}\bigl|\|\Phi x\|_2^2-1\bigr|<\varepsilon. \tag{1} \] For this purpose, the authors introduce a certain parameter \(\kappa_{s,m}(T)\) of the set \(T\). Their main theorem provides sufficient conditions on \(m\) and \(s\), in terms of \(\kappa_{s,m}(T)\), for inequality (1) to hold. After the introduction (Section 1) and the necessary preliminaries (Section 2), the authors give a short overview of the ideas behind the proof of the main theorem (Section 3). The fourth section concerns the special case \(T=E\cap S^{n-1}\) for a subspace \(E\subset \mathbb{R}^n\). In Section 5, the authors consider another special case, namely that the seminorm \(y\mapsto \sup_{x\in T}|\langle x,y\rangle|\) induced by \(T\) has a small type-\(2\) constant. All this serves as a warm-up for the proof of the general theorem, which is given in Section 7. In between, in Section 6, the authors consider applications to constrained least squares programs. Several more applications, obtained from the main theorem for special choices of \(T\), are presented in Section 8; these are relevant, for example, to compressed sensing, numerical linear algebra, and manifold learning. In all applications, the authors qualitatively recover or even improve upon previously known results (in some cases, no nontrivial results were known before). The last section discusses possible directions for future research. Three appendices, containing necessary material from probability theory, convex analysis, and least squares programs, are also provided. / rank
Normal rank
Property / reviewed by: Jan-David Hardtke / rank
Normal rank
Property / Mathematics Subject Classification ID: 46B07 / rank
Normal rank
Property / Mathematics Subject Classification ID: 15B52 / rank
Normal rank
Property / Mathematics Subject Classification ID: 90C25 / rank
Normal rank
Property / Mathematics Subject Classification ID: 65F50 / rank
Normal rank
Property / zbMATH DE Number: 6483695 / rank
Normal rank
Property / zbMATH Keywords: sparse Johnson-Lindenstrauss transform / rank
Normal rank
Property / zbMATH Keywords: random matrices / rank
Normal rank
Property / zbMATH Keywords: constrained least squares programs / rank
Normal rank
Property / zbMATH Keywords: compressed sensing / rank
Normal rank
Property / zbMATH Keywords: manifold learning / rank
Normal rank
Property / zbMATH Keywords: numerical linear algebra / rank
Normal rank
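As a concrete illustration of the construction described in the review above, the following minimal Python sketch draws one common variant of a sparse Johnson-Lindenstrauss matrix (each column has \(s\) nonzero entries \(\pm 1/\sqrt{s}\), placed in \(s\) distinct uniformly random rows) and evaluates the empirical analogue of the supremum in (1) over a finite random sample of unit vectors standing in for \(T\). The sampling scheme, dimensions, and function names here are illustrative assumptions, not the paper's exact distribution or notation.

```python
import numpy as np

def sparse_jl(m, n, s, rng):
    """Draw a sparse JL matrix: s nonzeros per column, values +-1/sqrt(s),
    in s distinct uniformly random rows. (Illustrative variant; the paper
    analyzes a family of such sparse distributions.)"""
    Phi = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=s, replace=False)   # distinct row indices
        signs = rng.choice([-1.0, 1.0], size=s)       # independent random signs
        Phi[rows, j] = signs / np.sqrt(s)
    return Phi

def max_distortion(Phi, X):
    """Empirical analogue of the supremum in (1) over the finite set of
    unit vectors given by the rows of X: max_x | ||Phi x||_2^2 - 1 |."""
    sq_norms = np.linalg.norm(X @ Phi.T, axis=1) ** 2
    return np.max(np.abs(sq_norms - 1.0))

rng = np.random.default_rng(0)
n, m, s = 1000, 200, 8                         # ambient dim, target dim, column sparsity
X = rng.standard_normal((500, n))              # finite stand-in for T
X /= np.linalg.norm(X, axis=1, keepdims=True)  # normalize rows onto S^{n-1}

Phi = sparse_jl(m, n, s, rng)
print("empirical sup over T of | ||Phi x||_2^2 - 1 |:", max_distortion(Phi, X))
```

Averaging the printed quantity over many independent draws of \(\Phi\) approximates the expectation in (1) for this particular finite \(T\); the main theorem addresses the harder question of how \(m\) and \(s\) must scale, via \(\kappa_{s,m}(T)\), for structured and possibly infinite \(T\).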


Language: English
Label: Toward a unified theory of sparse dimensionality reduction in Euclidean space
Description: scientific article

    Statements

Toward a unified theory of sparse dimensionality reduction in Euclidean space (English)
21 September 2015
