A new look at Bergström's theorem on convergence in distribution for sums of dependent random variables (Q791980)

scientific article

    Statements

    A new look at Bergström's theorem on convergence in distribution for sums of dependent random variables (English)
    1984
    In the paper Period. Math. Hung. 2, 173-190 (1972; Zbl 0252.60009), the reviewer used a comparison method to prove a limit theorem for sums of \(\phi\)-mixing random variables. The author of the paper under review claims that there is a gap in my proof. By an analogous comparison, but dealing with characteristic functions instead of the Gaussian transforms used by me, he proves that the theorem is true. However, he has misunderstood my definition of a certain associated sequence of random variables. To a given truncated sequence \(\{\xi_j^{(n)}(\epsilon)\}_{j=1}^{k_n}\) of random variables I have associated a sequence \(\{\zeta_j^{(n)}(\epsilon)\}_{j=1}^{k_n}\) such that \(E[\zeta_j^{(n)}(\epsilon)\zeta_{j-\nu}^{(n)}(\epsilon)]=E[\xi_j^{(n)}(\epsilon)\xi_{j+\nu}^{(n)}(\epsilon)]\) for \(0\leq\nu\leq k\), \(j+\nu\leq k_n\), and \(E[\zeta_j^{(n)}(\epsilon)\zeta_{j-\nu}^{(n)}(\epsilon)]=E[\xi_j^{(n)}(\epsilon)\xi_{j-\nu}^{(n)}(\epsilon)]\) otherwise, where \(k\) is a given positive integer. Hence in the first relation it is also required that \(j\geq k\), and "otherwise" includes the case \(j<k\). It is also required that the vector \((\zeta_1^{(n)}(\epsilon),\dots,\zeta_{k_n}^{(n)}(\epsilon))\) be Gaussian. The present author correctly points out that such a vector may not exist, since its covariance matrix may not be positive definite. However, I consider arbitrarily large \(k\), and it follows from the \(\mathcal{I}\)-condition and my condition \(C_5\) that the covariances \(E[\xi_j^{(n)}(\epsilon)\xi_{j+\nu}^{(n)}(\epsilon)]\) converge to 0 uniformly with respect to \(j\) and \(\nu\) as \(n\to\infty\). Hence we may assume that the covariance matrix is positive definite for sufficiently large \(k\) and \(n>k\). The author has chosen the Gaussian vector \((\zeta_1^{(n)}(\epsilon),\dots,\zeta_{k_n}^{(n)}(\epsilon))\) in the more natural way \(E[\zeta_i^{(n)}(\epsilon)\zeta_j^{(n)}(\epsilon)]=E[\xi_i^{(n)}(\epsilon)\xi_j^{(n)}(\epsilon)]\) for all \(i\) and \(j\). This choice is also possible for my comparison.
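    One standard sufficient condition behind a positive-definiteness claim of this kind, offered here only as an illustrative sketch and not necessarily the route taken in the original papers, is diagonal dominance. If \(A=(a_{ij})\) denotes the \(k_n\times k_n\) covariance matrix of the candidate Gaussian vector, with \(a_{ii}>0\), and the off-diagonal entries are small enough that
    \[ a_{ii} \;>\; \sum_{j\neq i}|a_{ij}| \qquad \text{for every } i, \]
    then by Gershgorin's circle theorem every eigenvalue \(\lambda\) of \(A\) satisfies \(|\lambda-a_{ii}|\leq\sum_{j\neq i}|a_{ij}|<a_{ii}\) for some \(i\), hence \(\lambda>0\); so \(A\) is positive definite and a Gaussian vector with covariance matrix \(A\) exists.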
    comparison method
    sums of \(\phi\)-mixing random variables

    Identifiers