Empirical processes and random projections (Q2566243)
scientific article
Language | Label | Description | Also known as |
---|---|---|---|
English | Empirical processes and random projections | scientific article |
Statements
Empirical processes and random projections (English)
22 September 2005
In the introduction the authors explain that the goal of this article is to present bounds on the suprema of certain empirical processes and to apply these bounds to some geometric problems. The motivation for this study is the following problem. Let \(X_1,\dots, X_k\) be independent random vectors in \(\mathbb{R}^n\) and let \(\Gamma: \mathbb{R}^n\to \mathbb{R}^k\) be the random operator \(\Gamma v= \sum^k_{i=1} (X_i, v) e_i\), where \(\{e_1,\dots, e_k\}\) is the standard orthonormal basis of \(\mathbb{R}^k\). Assume that \(\mathbb{E}(X_i, t)^2= 1\) for every \(1\leq i\leq k\) and every \(t\in S^{n-1}\), where \(S^{n-1}\) denotes the Euclidean unit sphere in \(\mathbb{R}^n\). Given a subset \(T\subset S^{n-1}\), the problem is whether the (random) operator \({1\over\sqrt{k}}\Gamma: \ell^n_2\to \ell^k_2\) almost preserves the norm of every element of \(T\). To ensure that \(\Gamma\) is almost norm preserving on \(T\) it suffices to analyze the supremum of the random variables \(Z^k_t= {1\over k}\|\Gamma t\|^2_{\ell^k_2}-1\) and to show that, with positive probability, \(\sup_{t\in T}|Z^k_t|\) is sufficiently small.

An important example is the case where \(X_i= (\xi^i_1,\dots, \xi^i_n)\) and the \((\xi^i_j)\) are independent random variables with expectation \(0\), variance \(1\) and a sub-Gaussian tail; then \(\Gamma\) is a random \(k\times n\) sub-Gaussian matrix. A standard concentration argument applied to each \(Z^k_t\) individually shows that if \(|T|= n\) and \(k\geq c{\log n\over\varepsilon^2}\), then with high probability, for every \(t\in T\), \[ 1- \varepsilon\leq {1\over\sqrt{k}} \|\Gamma t\|_{\ell^k_2}\leq 1+\varepsilon, \] and thus \({1\over\sqrt{k}}\Gamma\) almost preserves the norm of all the elements of \(T\). This fact is the basis of the celebrated Johnson-Lindenstrauss lemma [\textit{W. B. Johnson} and \textit{J. Lindenstrauss}, Contemp. Math. 26, 189--206 (1984; Zbl 0539.46017)], which studies the possibility of almost isometrically embedding a finite subset of \(\ell_2\) into a Hilbert space of much smaller dimension; that is, for \(A\subset\ell_2\), to find a mapping \(f: A\to \ell^k_2\) such that for any \(s,t\in A\), \[ (1-\varepsilon)\| s- t\|_{\ell_2}\leq \| f(s)- f(t)\|_{\ell^k_2}\leq (1+\varepsilon)\| s-t\|_{\ell_2}. \] Johnson and Lindenstrauss proved the following theorem. There exists an absolute constant \(c\) for which the following holds: if \(A\subset\ell_2\), \(|A|= n\) and \(k= c{\log n\over\varepsilon^2}\), then there exists an \(\varepsilon\)-isometry \(f: A\to \ell^k_2\). However, the ``complexity parameter'' for the set of normalized differences \(T\) in the Johnson-Lindenstrauss lemma, namely the logarithm of the cardinality of \(T\), is rather unsatisfactory.

Definition. For a metric space \((T,d)\) define \(\gamma_\alpha(T,d)=\inf\sup_{t\in T} \sum^\infty_{s=0} 2^{s/\alpha} d(t,T_s)\), where the infimum is taken over all admissible sequences of subsets \(T_s\subset T\), that is, sequences with \(|T_0|=1\) and \(|T_s|\leq 2^{2^s}\).

If a process is indexed by a set of functions with the same \(L_2\) norm, then with non-zero probability the \(\gamma_1\) part of the bound can be removed. The second part of this paper is devoted to the generic chaining, and the last part treats the empirical processes.
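The norm-preservation statement above can be illustrated numerically. Below is a minimal sketch (not part of the reviewed paper), assuming standard Gaussian entries as one concrete sub-Gaussian example and taking \(k\) of order \(\log|T|/\varepsilon^2\); the dimensions and the constant in front of the logarithm are chosen arbitrarily for illustration.

```python
import numpy as np

# Minimal sketch (assumption: i.i.d. N(0,1) entries as one concrete
# sub-Gaussian example). The rows X_i of Gamma then satisfy E (X_i, t)^2 = 1
# for every unit vector t, so (1/sqrt(k)) * Gamma should almost preserve
# the norms of a finite set T once k is of order log|T| / eps^2.
rng = np.random.default_rng(0)

n, num_points, eps = 1000, 200, 0.2
k = int(np.ceil(np.log(num_points) / eps**2))  # target dimension, up to a constant

# A finite set T of points on the Euclidean unit sphere S^{n-1}.
T = rng.standard_normal((num_points, n))
T /= np.linalg.norm(T, axis=1, keepdims=True)

# Random operator Gamma v = sum_i (X_i, v) e_i, realized as a k x n matrix.
Gamma = rng.standard_normal((k, n))

# Norms of the images under (1/sqrt(k)) * Gamma; their worst deviation
# from 1 measures the distortion of the embedding on T.
projected_norms = np.linalg.norm(T @ Gamma.T, axis=1) / np.sqrt(k)
print(f"k = {k}, worst norm distortion = {np.abs(projected_norms - 1).max():.3f}")
```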
Random projections
Generic chaining
Empirical processes