Bootstrap and randomization tests of some nonparametric hypotheses (Q1263902)

From MaRDI portal
scientific article
    Statements

    Bootstrap and randomization tests of some nonparametric hypotheses (English)
    1989
In this important paper, the asymptotic behaviour of some nonparametric tests is studied in situations where both bootstrap tests and randomization tests are applicable. Under fairly general conditions, it is shown that the tests are asymptotically equivalent in the sense that the resulting critical values and power functions converge to the same limits. The examples considered are: testing independence, testing for spherical symmetry, testing for exchangeability, testing for homogeneity, and testing for a change point.

The statistical problem has the following form. Given a sample \(X_1,\dots,X_n\) of S-valued random variables with unknown probability distribution P on S, consider the null hypothesis \(P\in \Omega_0\) versus \(P\in \Omega \setminus \Omega_0\), where \(\Omega\) is the class of all probability distributions on S. Writing \(\Omega_0\) as the set of probabilities P satisfying \(\tau P=P\) for some mapping \(\tau\), the test statistic studied takes the form \[ T_n=n^{1/2}\sup \{| P_n(V)-\tau P_n(V)| :\ V\in \mathcal{V}\}, \] where \(P_n\) is the empirical measure of \(X_1,\dots,X_n\) and \(\mathcal{V}\) is a Vapnik-Cervonenkis class. Both the bootstrap test and the randomization test reject for large values of \(T_n\); they differ in how the critical value is approximated, in one case by resampling from the empirical measure \(P_n\), and in the other by using the fact that the distribution of \(X_1,\dots,X_n\) is invariant under a group \(G_n\) of transformations.

The author's results can be summarized as follows. Let \(X_1,\dots,X_n\) be i.i.d. with \(P\in \Omega_0\), let \(J_n(t,P)\) be the distribution function of \(T_n\), and let \(J_n(t,P_n)\) and \(J_n(t,P\mid G_n)\) be the bootstrap and the randomization distributions of \(T_n\), respectively. Then it is proved that \[ \sup_t | J_n(t,P_n)-J_n(t,P\mid G_n)| \to 0 \] in probability.
Moreover, there exists a strictly increasing continuous distribution function J(t,P), non-random and depending only on P, such that \[ \sup_t | J_n(t,P_n)-J(t,P)| \to 0 \text{ in probability}, \qquad \sup_t | J_n(t,P)-J(t,P)| \to 0. \] Thus the difference between the corresponding critical values tends to 0 in probability, and analogous results hold for the power functions of the tests. This methodology is applied to the cases referred to above. This work is an important contribution to the understanding of the bootstrap, randomization, and their interrelationship.
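The asymptotic equivalence can be illustrated numerically for the homogeneity example. The following is a minimal sketch in Python, not taken from the paper: it uses the two-sample Kolmogorov-Smirnov statistic (a sup over half-lines, a Vapnik-Cervonenkis class) and compares the bootstrap critical value, obtained by resampling from the pooled empirical measure, with the randomization critical value, obtained by permuting the pooled sample; all names and the choice of sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ks_stat(x, y):
    """Two-sample KS statistic: sup over t of |F_m(t) - G_n(t)|,
    evaluated on the pooled sample points."""
    pooled = np.sort(np.concatenate([x, y]))
    fx = np.searchsorted(np.sort(x), pooled, side="right") / len(x)
    fy = np.searchsorted(np.sort(y), pooled, side="right") / len(y)
    return np.max(np.abs(fx - fy))

# Two samples drawn under the null hypothesis of homogeneity.
x = rng.normal(size=100)
y = rng.normal(size=120)
m, n = len(x), len(y)
pooled = np.concatenate([x, y])

B = 800
boot = np.empty(B)
perm = np.empty(B)
for b in range(B):
    # Bootstrap: resample with replacement from the pooled empirical measure P_n.
    xb = rng.choice(pooled, size=m, replace=True)
    yb = rng.choice(pooled, size=n, replace=True)
    boot[b] = ks_stat(xb, yb)
    # Randomization: apply a random element of the permutation group G_n.
    p = rng.permutation(pooled)
    perm[b] = ks_stat(p[:m], p[m:])

# Approximate 5%-level critical values from the two reference distributions.
crit_boot = np.quantile(boot, 0.95)
crit_perm = np.quantile(perm, 0.95)
print("bootstrap critical value:    ", crit_boot)
print("randomization critical value:", crit_perm)
```

At these sample sizes the two critical values already nearly coincide, in line with the theorem that the difference tends to 0 in probability.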
    rotational invariance
    exchangeability
    asymptotic equivalence
    bootstrap tests
    randomization tests
    critical values
    power functions
    testing independence
    testing for spherical symmetry
    testing for exchangeability
    testing for homogeneity
    testing for a change point
    empirical measure
    Vapnik-Cervonenkis class