Monotonicity of the mean distance for empirical dependent Gaussian samples (Q1124246)
scientific article; zbMATH DE number 4111828
Statements
Monotonicity of the mean distance for empirical dependent Gaussian samples (English)
1988
Let X, Y be i.i.d. symmetric Gaussian random vectors with values in \(\mathbb{R}^n\) and covariance matrix K. Denote \(\phi(x,y)=\min_{\sigma}\|x-\sigma y\|_1\), where \(x,y\in\mathbb{R}^n\), \(\|\cdot\|_1\) is the \(\ell_1\)-norm, and \(\sigma\) runs over the group of all coordinate permutations. The following theorem is established: if K is bounded above by the identity matrix I (in the positive semidefinite order), then the expectation \(E\,\phi(X,Y)\) attains its maximum at \(K=I\). This theorem was announced in the author's paper, ibid. 142, 164-166 (1986; see the preceding review, Zbl 0678.62055), but the proof there contained a gap. Here, a corrected version of the proof is sketched.
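To make the quantity \(\phi\) concrete, here is a minimal numerical sketch, not taken from the paper: since the \(\ell_1\)-norm decomposes coordinatewise, the minimizing permutation matches sorted coordinates (a standard rearrangement argument), so \(\phi(X,Y)\) equals \(n\) times the Kantorovich metric between the empirical measures of the coordinates of X and Y, which explains the keywords below. The function names `phi` and `mean_phi`, the sample size, and the example covariance are illustrative assumptions, not from the source.

```python
import numpy as np

def phi(x, y):
    # For the l1 cost |x_i - y_j| on the real line, the minimum over
    # coordinate permutations sigma of ||x - sigma y||_1 is attained by
    # matching sorted coordinates (rearrangement argument).
    return np.abs(np.sort(x) - np.sort(y)).sum()

def mean_phi(K, n_samples=20_000, seed=0):
    # Monte Carlo estimate of E phi(X, Y) for X, Y i.i.d. N(0, K).
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    X = rng.multivariate_normal(np.zeros(n), K, size=n_samples)
    Y = rng.multivariate_normal(np.zeros(n), K, size=n_samples)
    return np.mean([phi(x, y) for x, y in zip(X, Y)])

# Illustrative check of the theorem's inequality for n = 2: the matrix K
# below has eigenvalues 0.5 and 0.9, so K <= I in the positive
# semidefinite order, and the estimate at K should not exceed the
# estimate at I (up to Monte Carlo error).
K = np.array([[0.7, 0.2],
              [0.2, 0.7]])
print("E phi at K:", mean_phi(K))
print("E phi at I:", mean_phi(np.eye(2)))
```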
empirical measures
Kantorovich metric
permutation group
symmetric Gaussian random vectors